Fidelma Russo & Latha Vishnubhotla, HPE | HPE Discover 2022
>> Announcer: theCUBE presents HPE Discover 2022, brought to you by HPE. >> Welcome back to Las Vegas, everybody. You're watching theCUBE's coverage of HPE Discover 2022 in Las Vegas, this is day two. My co-host John Furrier and I are pleased to welcome Fidelma Russo, who's the CTO of HPE, a somewhat newly minted CTO, and Latha Vishnubhotla, who's the Chief Platform Officer of HPE. A lot of talk about platform. Ladies, welcome to theCUBE, great to see you. >> Thank you. >> Good to be here. >> So Fidelma, your keynote yesterday was awesome, and it's really starting to become clear: you're building out a platform, and your job is to create that platform so that others can build value on top of it. Maybe describe how you see the role. >> Yeah, so it's a bit of a non-traditional CTO role. You know, I have the CTO innovation side, but I also am building the platform, and also the security piece of the platform. You guys have known me for a long time, I love to build products, and so with this I get to build the platform, and then I work with all of the different business units on taking their offers. First of all, kind of looking at, do they make sense? You know, are they adding to the platform? Do we have overlap in the portfolio? And how do they come onto the platform, and how do we make sure we have a consistent user experience all the way through the life cycle of that particular offer, from, you know, just browsing the offer to actually using it to getting support on it. >> And a lot of that is ecosystem enablement, right? I mean, you're looking at that as well. Do you consider that part of the portfolio in terms of some of those overlap discussions, and where you leave off and they pick up?
>> So, you know, HPE is a partner-first organization, and that helps us get our breadth and our scale across, you know, the globe. And so basically, when I say customer, I kind of mean partner as well. And so, the partners, you know, we are working closely with a number of them to build tightly into the platform, exposing our APIs. And then in terms of other areas, we'll have our marketplace, where they may not be as tightly coupled, but they'll be in the marketplace and you can consume from the marketplace. So, it's with and through partners. >> And Latha, interesting title, Chief Platform Officer, not a common title. So, you guys are partners in crime in this effort. Maybe you could describe your role in a little bit more detail. >> Yeah, so as Fidelma mentioned, when we bring all these services and offers on top of the platform, what are the capabilities that we need to offer so that they're consistent? The customer experience, the partner experience is consistent from the time they browse to when they buy it, operate it and, you know, maintain it. Throughout the journey, the experience is kept consistent for all the offers. For that, we need a platform, you know, otherwise everybody will build their own experience, and for customers to operate hundreds of locations, it gets complex. >> The question on the platform I want to ask is, in this modern era, 'cause we've seen the platform wars going back to the old data center days, where platforms and tools were out there, very monolithic in some cases. As you have more of a distributed computing market developing, which we all see with the edge and on-premises and public cloud, cloud to edge, as you guys call it, what does the modern platform look like? What are you guys enabling? Because you have partners building on top of it, you have to enable value, and their customer is your customer. So, what is the enablement that you're looking for?
What are some of the first principles that you guys think about when you look at this modern platform on top of now Cloud 3.0, 2.0, whatever you want to call it, this next generation? What are some of the areas that you see that are key for HPE to build into the platform? >> Yeah, so first of all, an API-first approach is very key, so that our ISVs and partners can develop on top of it. APIs are very key, and security, building security from hardware all the way to the services, the whole stack, integrating security into that, and providing the ease-of-use features on top of it, whether it is the buy experience or having a unified support experience. So again, it all goes back to, when you have hundreds of locations, how do you visualize what cases are running in your locations? What cases need to be fixed in terms of the infrastructure and all that? The wellness dashboards, bringing all of that onto the platform, so the customer can go through a day zero, day one, day two journey on the platform. >> Yeah, and all the data's in there, and the scalability of data with machine learning is here. I want to go to the next step and ask you guys, what do you think about the notion of integration? Because if you believe that the software industry has been, I won't say taken over, but is driven by open source, open source is where all the action is, but that's not the end game. Scale, compute, and integration. You mentioned API first, that's just the beginning. The partners have got to integrate, they're going to talk to each other, you've got security. How do you guys think about that? Because that's the top discussion right now. Okay, I've got Kubernetes clusters, I've got Docker containers, I'm going to leverage all that open source into the platform, but I've got to integrate.
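The "API first" principle described above can be sketched in a few lines. This client pattern is purely illustrative; the endpoint paths, field names, and URL are invented for this sketch and do not come from any real HPE GreenLake API.

```python
# Hypothetical sketch of an "API first" platform: every capability a partner
# needs is reachable programmatically, behind authenticated endpoints.
# All names here (base URL, paths, headers) are invented for illustration.
import json


class PlatformClient:
    """Minimal client pattern for an API-first platform (illustrative only)."""

    def __init__(self, base_url, token):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def _headers(self):
        # Every call is authenticated; an API-first platform has no
        # console-only side doors.
        return {"Authorization": f"Bearer {self.token}",
                "Content-Type": "application/json"}

    def build_request(self, resource):
        # Return the request a partner integration would send; this sketch
        # performs no network I/O.
        return {"method": "GET",
                "url": f"{self.base_url}/v1/{resource}",
                "headers": self._headers()}


client = PlatformClient("https://platform.example.com", token="demo-token")
req = client.build_request("wellness/dashboards")
print(json.dumps(req, indent=2))
```

A partner integration would wrap calls like this, rather than scripting a web console, which is the practical difference an API-first design makes.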
>> So, you know, in terms of open source, I mean, we embrace open source. You know, our security IP, SPIFFE and SPIRE, so we are very active in that particular area, and we intend to engage in open source where it makes sense. And we enable people to easily integrate onto the platform with their preferred open source, you know, whatever they're looking for. And then the piece about that is, what we want to provide is orchestration. So what are the hard things about open source? It's great to take something, and you put it in, and it's like, now you can't really use it, okay? And so how do we provide that consistent orchestration, that consistent automation, and do it in a way that, because it's on a platform, you can now access it in a common way no matter where you are. And so that's kind of our approach to it. >> I want to ask you guys about the announcements that you made yesterday, Fidelma, in your keynote. There were four key components, four pillars I guess you'd call 'em. The first one was core services. I'll comment, and you course correct if I don't get it right, but core services via a single common URL, you showed a cloud-like console. That's how we should be thinking about it? >> That's our platform, it's the Cloud Console. >> Great, and then on the operations side, you've got operational services, it's like deploy and provision, it's kind of the sysadmin tools to do that. Roles and personas, I saw that as, okay, resonates. It's like, I'm going to talk to the different personas. What are those personas? >> So, I mean, if you come in and you are a developer, you could be interested in cost analytics, but you're probably not really thinking about it. And so what that does is, if you come in and you're a developer, over time we will understand your history, we will understand your persona, and we will curate your view to that persona, okay?
So if I'm a finance person and I'm looking at my cost analytics, and I want to understand where my spend is and what the spend is on, you can also take a curated path through the Cloud Console, so you just see what it is you want to see. >> Makes sense, you don't see all the extraneous data that you don't need. And then commerce, is that like billing, or is that monetization, or both? >> It's both. So today it's billing, and we've also brought the buy experience on there, so you can now go to the console, you can do your first purchase there. Equally well, you can do a refresh of a subscription, because I personally think that most people won't do their first purchase there, but they will do their next purchase when they're, you know, refreshing their subscription, and then you get all of the billing and the visibility into your bills through the platform. >> And what's available today in market, and how will that roll out? >> Yeah, so in market today, you can manage your subscriptions, you get your billing, you know, and your visibility into your billing, and then over the next couple of months, we will be bringing out the buy experience. And I think it's on Compute Ops Manager, so that was announced for the compute, you know, to manage your compute from the cloud. >> Antonio in his keynote said, you know, customers ask me all the time, "which workloads should go on-prem and which should go in the public cloud?" And when I heard that, I said, yeah, I get that question all the time. And he said, "but that's the wrong question." I'm like, ah, but I want the answer to that, which should go where? >> Well, I mean, it is really a hard question to answer. And so, you know, I think you have to look at your workloads, and you have to think about, are they latency sensitive? Okay, do they have high data gravity?
Okay, and do they have different requirements? For instance, like, you may have a requirement that you want a very particular type of AI and ML that you can only get from a specific public cloud, and then that's the right place to put it. So there's a whole slew of attributes that you have to look at to put the workload in the right place. And what I would say is, I think like five years ago, six years ago, we all thought that every workload was going to the public cloud, and now here we are and we have workloads staying in the data center, they may be moving to a colo. You know, also security is another key attribute, compliance, what are my compliance requirements? You know, for highly compliant industries, taking workloads and putting them on the public cloud may work, but many times it's too much of a compliance risk for people to figure out what to do. Data sovereignty is also another area that, you know, now we're starting to see in Europe, you know, data can't leave the country. So, there are lots and lots of attributes, and I think workloads are going to exist everywhere. >> You didn't say predictability, which used to be the default for on-prem, so, okay, we're making progress here. And so now I want to ask you, you mentioned, like, it may be some ML tool that you can only get in the cloud. Is your strategy to close that gap over time, or is it to maybe stay more focused? >> So, we believe that, you know, we serve our customers best by being focused, right? And so we have innovations going on at the edge, and I saw you just talked to Phil, and so, you know, our customers have compute needs at the edge, cloud needs at the edge, at the data center, and then in the areas where it makes sense, like our backup and recovery space, to be hybrid, where you can deploy the same backup and recovery service on-prem and in the public cloud, then that's where we will interoperate with the public cloud. But we're being very focused about where we add value.
>> Talk about security posture, how you guys look at that holistically, and then maybe specifically in, you know, cloud, core, edge, 'cause it's all cloud operations at this point, DevOps and now network programmability. What's the security posture, zero trust or trust? Trust and verify, zero trust, what's the view? >> Yeah, so, leading with the zero trust approach, starting all the way from the hardware, silicon root of trust, SPIFFE and SPIRE for the workloads, and going up the stack, even including the network security as well. So this has to be viewed in a holistic fashion, security is always like that, you know, and that's exactly what we are doing on the platform. >> So zero trust more at the lower end of the stack, there's no perimeter there, the perimeter's gone, you've got to manage that, and then as you get to software, shifting left as they call it, that's more trust-specific, trust and verify, is that what you're saying? >> Correct. >> Okay. >> Latha, maybe you could give us a little taste of the roadmap. When you talk to customers, what are some of the big challenges that they're throwing at you, and what can we expect in the future from the platform? >> Yeah, so from the challenges point of view, it is the ability to run workloads wherever they want, whenever they want, and having that capacity available in a, you know, auto-scale fashion. This is what they're looking for, and that's exactly what we are addressing on the platform. We have the infrastructure, which is available as Infrastructure-as-a-Service, we are bringing SaaS modules on top of it, all of this is combined on the platform, right. >> Is your strategy going forward, Fidelma, to leverage the hyperscaler APIs and primitives, specifically by building a substrate on top of those? Or is it really to let them handle that, and you build the substrate for your part that's on-prem, maybe the hybrid, and out to the edge? >> So I think it's a combination of both.
It's kind of where it makes sense. You know, if you look at the offering for HCI, the GreenLake for HCI, that shows your VMs on-prem, but it'll also show you your VMs in Amazon, so leveraging their API. So that's where we build a substrate that goes across. I don't believe in a cloaking mechanism, it's never made sense in this world, because you always end up degenerating down to, you know, like the smallest set of things. So it's a combination, it's API integration where it makes sense, where customers want to have a common experience on-prem and in the cloud, and then it's, you know, really focusing for us on the edge, the data center and the cloud. >> I've got to follow up on the cloaking mechanism. Isn't VMware a cloaking mechanism? Is Kubernetes a cloaking mechanism? >> No, that's orchestration. >> Well, I think in terms of that, you know, we've had many efforts in this industry for, I'm going to build a manager of managers, you know, the pane of glass that's going to cover the world, and that has never worked. You know, and VMware and Kubernetes are way more than that. >> Good answer, that's a safe answer. Final question as we wrap up. What are the value promises that you guys talk to customers about? When you see customers saying, we're building this platform, here we are today, here's the roadmap, here's our promise, here's what we're trying to do, what's that message? >> So the message is really, you know, we're focused on where people want to run their workloads. And, you know, traditionally, we've always come to market with offerings that are great in their silos, but they don't make it easy for customers to, you know, to consume, to get support, to even think that they come from the same company.
So first of all, let's bring them all together, let's make sure that when you look at HPE and you use HPE that, you know, it's a cloud experience, and that you don't kind of feel the seams between the organizations, and on top of that, you know, it's rapid engagement with the customer to get their feedback. And so that's what the platform is all about, making that journey for the customers smooth and easy, and then, you know, delivering the offerings that make sense, where we can differentiate ourselves and add value, and that's kind of what we talked-- >> And of course ecosystem, if it works, the ecosystem's thriving, that's a big kind of scoreboard feature. >> Exactly, and the partners are front and center, you know, we can't deliver the value without them, and so being able to access those through the GreenLake portal is also, you know, a huge value to everybody, because again, you're not trying to combine all of these different pieces from different parts of the organization and the ecosystem. >> Guys, I want to thank you for coming on theCUBE. Fidelma, I was really excited when I saw that you took the job as CTO, you're somebody I've known for a long time and watched your career, you've got product chops. Latha, it's great to see you in this, it's great to see women in product and technical roles, I love it, and so, good job, good job HPE. >> Well, hey. >> We didn't get the secrets out of you, the ones I hear that are on the roadmap and all the secret sauce, we'll get you back. >> You'll see us. >> Thanks again. >> Thank you. >> Thank you. >> For John Furrier, our guests, and this is Dave Vellante at theCUBE's coverage of HPE Discover 2022, we'll be right back right after this short break. (gentle music)
Kirsten Newcomer, Red Hat | Managing Risk In The Digital Supply Chain
(upbeat music) >> Hello everyone, my name is Dave Vellante, and we're digging into the many facets of the software supply chain and how to better manage digital risk. I'd like to introduce Kirsten Newcomer, who is the Director of Cloud and DevSecOps Strategy at Red Hat. Hello Kirsten, welcome. >> Hello Dave, great to be here with you today. >> Let's dive right in. What technologies and practices should we be thinking about that can help improve the security posture within the software supply chain? >> So I think the most important thing for folks to think about really is adopting DevSecOps. And while organizations talk about DevSecOps, and many folks have adopted DevOps, they tend to forget the security part of DevSecOps. And so for me, DevSecOps is both DevSec, how do I shift security left into my supply chain, and SecOps which is a better understood and more common piece of the puzzle, but then closing that loop between what issues are discovered in production and feeding that back to the development team to ensure that we're really addressing that supply chain. >> Yeah I heard a stat. I don't know what the source is, I don't know if it's true, but it probably is that around 50% of the organizations in North America, don't even have a SecOps team. Now of course that probably includes a lot of smaller organizations, but the SecOps team, they're not doing DevSecOps, but so what are organizations doing for supply chain security today? >> Yeah, I think the most common practice, that people have adopted is vulnerability scanning. And so they will do that as part of their development process. They might do it at one particular point, they might do it at more than one point. But one of the challenges that, we see first of all, is that, that's the only security gate that they've integrated into their supply chain, into their pipeline. So they may be scanning code that they get externally, they may be scanning their own code. 
But the second challenge is that the results take so much work to triage. This is static vulnerability scanning. You get information that is not in full context, because you don't know whether a vulnerability is truly exploitable unless you know how exposed that particular part of the code is to the internet, for example, or to other aspects. And so it's just a real challenge for organizations who are only looking at static vulnerability data to figure out what the right steps to take are to manage those. And there's no way we're going to wind up with zero vulnerabilities in the code that we're all working with today. Things just move too quickly. >> Is that idea of vulnerability scanning, is it almost like sampling, where you may or may not find the weakest link? >> I would say that it's more comprehensive than that. The vulnerability scanners that are available are generally pretty strong, but again, if it's a static environment, a lot of them rely on the NVD database, which typically is going to give you the worst case scenario, and by nature can't account for things like, was the software that you're scanning built with controls, mitigations built in. It's just going to tell you, this is the package, and these are the known vulnerabilities associated with that package. It's not going to tell you whether there were compile-time flags that may have mitigated that vulnerability. And so it's almost overwhelming for organizations to prioritize that information, and really understand it in context. And so when I think about the closed loop feedback, you really want not just that static scan, but also analysis that takes into account the configuration of the application and the runtime environment, and any mitigations that might be present there. >> I see, thank you for that. So, given that digital risk and software supply chains are now front and center, we read about them all the time now, how do you think organizations are responding?
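The triage problem described above can be made concrete with a small sketch: a static scanner reports findings, but priority depends on runtime context such as internet exposure. The finding format and scoring weights below are invented for illustration; real scanners (Clair, Trivy, and others) each have their own schemas and real prioritization is far more involved.

```python
# Sketch of context-aware triage: the same severity ranks differently
# depending on whether the affected workload is exposed to the internet.
# Severity weights and the +2 exposure bonus are arbitrary illustrative values.

def prioritize(findings, exposed_workloads):
    """Rank findings so exposed, higher-severity items come first."""
    def score(f):
        sev = {"critical": 3, "high": 2, "medium": 1, "low": 0}[f["severity"]]
        exposure = 2 if f["workload"] in exposed_workloads else 0
        return sev + exposure
    return sorted(findings, key=score, reverse=True)


findings = [
    {"cve": "CVE-2021-0001", "severity": "high", "workload": "batch-job"},
    {"cve": "CVE-2021-0002", "severity": "medium", "workload": "web-frontend"},
    {"cve": "CVE-2021-0003", "severity": "low", "workload": "web-frontend"},
]
ranked = prioritize(findings, exposed_workloads={"web-frontend"})
```

Note that the medium-severity finding on the exposed web frontend outranks the high-severity finding on the unexposed batch job, which is exactly the kind of context a raw static scan cannot provide.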
What's the future of the software supply chain going to look like? >> That's a great one. So I think organizations are scrambling. We've certainly, at Red Hat, we've seen an increase in questions about Red Hat's own supply chain security, and we've got lots of information that we can share and make available. But I think also we're starting to see this strong increased interest in software bills of materials. So I actually started working with automation and standards around software bills of materials a number of years ago. I participated in The Linux Foundation SPDX project. There are other projects like CycloneDX. But I think all organizations are going to need to, those of us who deliver software, we're going to need to provide SBOMs, and consumers of our software should be looking for SBOMs, to help them understand, to build transparency across the projects. And to facilitate that automation, you can leverage the data in a software package list to get a quick view of vulnerabilities. Again, you don't have that runtime context yet, but it saves you that step, perhaps, of having to do the initial scanning. And then there are additional things that folks are looking at. Attested pipelines are going to be key for building your custom software. As you pull the code in and your developers build their solutions, their applications, being able to vet the steps in your pipeline, and attest that nothing has happened in that pipeline, is really going to be key. >> So the software bill of materials is going to give you a granular picture of your software, and then the chain of provenance, if you will? >> Well, an SBOM, depending on the format, an SBOM absolutely can provide a chain of provenance. But another thing, when we think about it from the security angle, so there's the provenance, where did this come from? Who provided it to me?
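To make the SBOM idea concrete, here is a toy example of reading a CycloneDX-style JSON document and extracting its package list, the first step in the automation described above. The SBOM content is invented for illustration; real SBOMs are generated by build tooling rather than written by hand.

```python
# Minimal sketch: pull (name, version) pairs out of a CycloneDX JSON SBOM.
# The document below is a two-component toy example, not a real SBOM.
import json

sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"type": "library", "name": "openssl", "version": "1.1.1k",
     "purl": "pkg:generic/openssl@1.1.1k"},
    {"type": "library", "name": "log4j-core", "version": "2.14.1",
     "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"}
  ]
}
"""


def list_packages(sbom_text):
    """Return (name, version) pairs from a CycloneDX JSON document."""
    sbom = json.loads(sbom_text)
    if sbom.get("bomFormat") != "CycloneDX":
        raise ValueError("not a CycloneDX document")
    return [(c["name"], c["version"]) for c in sbom.get("components", [])]


packages = list_packages(sbom_json)
```

This package list is what downstream tooling consumes, which is the transparency point being made in the interview: the consumer never has to rediscover what is inside the software.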
But also with that bill of materials, that list of packages, you can leverage tooling that will give you vulnerability information about those packages. At Red Hat we don't think that vulnerability info should be included in the SBOM, because vulnerability data changes every day. But it saves you a step, potentially. Then you don't necessarily have to be so concerned about doing the scan, you can pull data about known vulnerabilities for those packages without a scan. Similarly, the attestation in the pipeline, that's about things like ensuring that the code that you pull into your pipeline is signed. Signatures are in many ways a more important piece for defining provenance and getting trust. >> Got it. So I was talking to a CISO the other day, and was asking her, okay, what are your main challenges, kind of the standard analyst questions, if you will. She said, look, I've got great people, but I just don't have enough depth of talent to handle the challenges, I'm always sort of playing catch up. That leads one to the conclusion, okay, automation is potentially an answer to address that problem, but at the same time, people have said to me, sometimes we put too much faith in automation. Some say, okay, hey Kirsten, help me square the circle. I want to automate because I lack the talent, but it's not sufficient. What are your thoughts on automation? >> So I think in the world we're in today, especially with cloud native applications, you can't manage without automation, because things are moving too quickly. So I think the way that you assess whether automation is meeting your goals becomes critical. And so looking for external guidance, such as NIST's Secure Software Development Framework, that can help. But again, when we come back, I think, look for an opinionated position from the vendors, from the folks you're working with, from your advisors, on what are the appropriate set of gates.
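The point above, that vulnerability data changes daily and so is looked up from the package list rather than stored in the SBOM, maps onto services like OSV.dev. The sketch below only builds query payloads in the shape the OSV API accepts; it makes no network call, and a real integration would POST each payload to the `v1/query` endpoint and act on the response.

```python
# Sketch: turn an SBOM package list into OSV-style vulnerability queries.
# No network I/O here; the payload shape follows the public OSV.dev API,
# but treat this as an assumption to verify against the current API docs.

def osv_queries(packages, ecosystem):
    """Build one OSV-style query per (name, version) package pair."""
    return [
        {"version": version,
         "package": {"name": name, "ecosystem": ecosystem}}
        for name, version in packages
    ]


packages = [("log4j-core", "2.14.1"), ("jackson-databind", "2.9.10")]
queries = osv_queries(packages, ecosystem="Maven")
```

Because the SBOM stays stable while the vulnerability database moves, the same package list can be re-queried every day without re-scanning anything, which is the step-saving Kirsten describes.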
And we've talked about vulnerability scanning, but analyzing the config data for your apps is just as important. And so I think we have to work together as an industry to figure out what are the key security gates, how do we audit the automation, so that I can validate that automation and be comfortable that it is actually meeting the needs. But I don't see how we move forward without automation. >> Excellent. Thank you. We were forced into digital without a lot of thought. Some folks, it's a spectrum, some organizations are in better shape than others, but many had to just dive right in without a lot of strategy. And now people have sat back and said, okay, let's be more planful, more thoughtful. And then of course, you've got the supply chain hacks, et cetera. How do you think the whole narrative and the strategy is going to change? How should it change the way in which we create, maintain, and consume software, as both organizations and individuals? >> Yeah. So again, I think there's going to be, and there's already, a demand for more transparency from software vendors. This is a place where SBOMs play a role, but there's also a lot of conversation out there about zero trust. So what does that mean? You have to have a relationship with your vendor that provides transparency, so that you can assess the level of trust. You also have to, in your organization, determine, to your point earlier about people with skills and automation, how do you trust but verify? This is not just with your vendor, but also with your internal supply chain. So trust and verify remains key. That's been a concept that's been around for a while. Cloud native doesn't change that, but it may change the tools that we use. And we may also decide, what are our trust boundaries? Where are we comfortable trusting? Where do we think that zero trust is a more applicable frame to apply?
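One of the "security gates" discussed above, reduced to its essence: the pipeline fails the build when the scan reports findings at or above a severity threshold. The policy values and finding format here are illustrative, not a recommendation.

```python
# Sketch of an automated pipeline gate: block the build on findings at or
# above a configured severity. Threshold choice is a policy decision.
SEVERITY_ORDER = ["low", "medium", "high", "critical"]


def gate(findings, fail_at="high"):
    """Return (passed, blocking_findings) for a list of scan findings."""
    threshold = SEVERITY_ORDER.index(fail_at)
    blocking = [f for f in findings
                if SEVERITY_ORDER.index(f["severity"]) >= threshold]
    return (len(blocking) == 0, blocking)


ok, blocking = gate(
    [{"cve": "CVE-2022-1111", "severity": "medium"},
     {"cve": "CVE-2022-2222", "severity": "critical"}],
    fail_at="high",
)
```

Because the gate is a pure function of its inputs, it is also auditable in the way the interview calls for: you can replay historical scan results through it and verify the automation did what the policy says.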
But I do think back to the automation piece, and again, it is hard for everybody to keep up. I think we have to break down silos, we have to ensure that teams are talking across those silos, so that we can leverage each other's skills. And we need to think about managing everything as code. What I like about the everything-as-code approach, including security, is that it does create auditability in new ways. If you're managing your infrastructure and your security policies with a GitOps-like approach, it provides visibility and auditability, and it enables your dev team to participate in new ways. >> So when you're talking about zero trust I think, okay, I can't trust users, I've got to trust the verified users, machines, employees, my software, my partners. >> Yep. >> Every possible connection point. >> Absolutely. And this is where both attestation and identity become key. I mean, the SolarWinds team has done a really interesting set of things with their supply chain, in response to the hack they were dealing with. They're now using Tekton Chains to ensure that they have attested every step in their supply chain process, and that they can replicate that with automation. So they're doing a combination of, yep, we've got humans who need to interact with the chain, and then we can validate every step in that chain. And then workload identity is a key thing for us to think about too. So how do we assert identity for the workloads that are being deployed to the cloud, and verify, whether that's with SPIFFE and SPIRE or related projects, that the workload is the one that we meant to deploy, and also runtime behavioral analysis. I know we've been talking about supply chain, but again, I think we have to do this closed loop. You can't just think about shifting security left.
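The workload identity idea mentioned above, in miniature: a SPIFFE ID names a workload as `spiffe://<trust-domain>/<workload-path>`. The sketch below is a simplified structural check only; real verification is done cryptographically by a SPIFFE implementation such as SPIRE, not by string parsing.

```python
# Sketch: split a SPIFFE ID into its trust domain and workload path.
# This checks shape only; it proves nothing about the workload's identity.
from urllib.parse import urlparse


def parse_spiffe_id(spiffe_id):
    """Return (trust_domain, path) for a SPIFFE ID; raise if malformed."""
    parsed = urlparse(spiffe_id)
    if parsed.scheme != "spiffe" or not parsed.netloc:
        raise ValueError(f"not a SPIFFE ID: {spiffe_id!r}")
    return parsed.netloc, parsed.path


domain, path = parse_spiffe_id("spiffe://prod.example.com/payments/api")
```

In a real deployment, SPIRE issues the workload a short-lived certificate bound to its SPIFFE ID, and peers verify that certificate rather than trusting the string, which is the "assert and verify identity" step Kirsten describes.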
And I know you mentioned earlier, a lot of teams don't have SecOps, but there are solutions available that help assess behavior at runtime, and that information can be fed back to the app dev team, to help them adjust and verify and validate: where do I need to tighten my security? >> I'm glad you brought up SolarWinds, Kirsten, and what they're doing. As I remember, after 9/11, everyone was afraid to fly, but it was probably the safest time in history to fly. And so, same analogy here. SolarWinds probably has learned more about this, and its reputation took a huge hit. But if you had to compare what SolarWinds has learned and applied, at the speed at which they've done it, with maybe some other software suppliers, you might find that they've actually done a better job. It's just, unfortunately, something hit that we never saw before. To me it was like Stuxnet, we'd never seen anything like this before, and then boom, we've entered a whole new era. I'll give you the last word, Kirsten. >> No, just to agree with you. And I think, again, as an industry, it's pushed us all to think harder and more carefully about where do we need to improve? What tools do we need to build to help ourselves? Again, SBOMs have been around for a good 10 years or so, but they are enjoying a resurgence of importance. Signing, image signing, manifest signing, that's been around for ages, but we haven't made it easy to integrate that into the supply chain, and that's work that's happening today. Similarly, that attestation of a supply chain, of a pipeline, that's happening. So I think as an industry, we've all recognized that we need to step up, and there's a lot of creative energy going into improving in this space. >> Excellent, Kirsten Newcomer, thanks so much for your perspectives. Excellent conversation. >> My pleasure, thanks so much. >> You're welcome. And you're watching theCUBE, the leader in tech coverage. (soft music)
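The image and manifest signing mentioned in the closing exchange rests on one simple ingredient: the digest recorded in a manifest must match the artifact you actually received. Real signing (for example with Sigstore's cosign) adds an asymmetric signature over that manifest; the sketch below shows only the digest-verification half, with invented artifact names.

```python
# Sketch of the digest-check half of artifact signing: recompute the hash of
# what you received and compare against what the manifest records. A real
# system would also verify a signature over the manifest itself.
import hashlib

manifest = {
    "app.tar.gz": "sha256:" + hashlib.sha256(b"release contents").hexdigest(),
}


def verify_digest(name, content, manifest):
    """Check that content matches the digest the manifest records for name."""
    digest = "sha256:" + hashlib.sha256(content).hexdigest()
    return manifest.get(name) == digest


ok = verify_digest("app.tar.gz", b"release contents", manifest)
tampered = verify_digest("app.tar.gz", b"tampered contents", manifest)
```

Any modification to the artifact changes its digest, so a tampered download fails the check even before any signature is examined.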
SUMMARY :
and how to better manage digital risk. Hello Dave, great to that can help improve the security posture and more common piece of the puzzle, that around 50% of the that particular part of the code It's not going to tell you going to look like? And to facilitate that automation, the code that you pull into but the same time, people have said to me, that it is actually meeting the needs. and the strategy is going to change? But I do think back to the to trust the verified users, that the workload is the to Kirsten what they're doing. No just to agree with you. thanks so much for your perspectives. the leader in tech coverage.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Kirsten | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Kirsten Newcomer | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
NIST | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
SolarWinds | ORGANIZATION | 0.99+ |
second challenge | QUANTITY | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Tekton | ORGANIZATION | 0.99+ |
North America | LOCATION | 0.99+ |
10 years | QUANTITY | 0.99+ |
DevSecOps | TITLE | 0.99+ |
Kir | PERSON | 0.99+ |
more than one point | QUANTITY | 0.98+ |
around 50% | QUANTITY | 0.98+ |
today | DATE | 0.97+ |
sten Newcomer | PERSON | 0.97+ |
Stuxnet | PERSON | 0.96+ |
first | QUANTITY | 0.96+ |
DevSec | TITLE | 0.95+ |
Secure Software Development Framework | TITLE | 0.93+ |
SecOps | TITLE | 0.9+ |
point | QUANTITY | 0.89+ |
zero vulnerabilities | QUANTITY | 0.88+ |
zero trust | QUANTITY | 0.87+ |
Asiso | ORGANIZATION | 0.85+ |
of years ago | DATE | 0.73+ |
911 | OTHER | 0.7+ |
DevOps | TITLE | 0.67+ |
CycloneDX | TITLE | 0.66+ |
Ops | ORGANIZATION | 0.65+ |
SPIFFE SPIRE | TITLE | 0.65+ |
DevSecOps | ORGANIZATION | 0.63+ |
theCUBE | ORGANIZATION | 0.61+ |
SPDX | TITLE | 0.41+ |
Linux | ORGANIZATION | 0.21+ |
Kirsten Newcomer, Red Hat V2
(upbeat music) >> Hello everyone, my name is Dave Vellante, and we're digging into the many facets of the software supply chain and how to better manage digital risk. I'd like to introduce Kirsten Newcomer, who is the Director of Cloud and DevSecOps Strategy at Red Hat. Hello Kirsten, welcome. >> Hello Dave, great to be here with you today. >> Let's dive right in. What technologies and practices should we be thinking about that can help improve the security posture within the software supply chain? >> So I think the most important thing for folks to think about really is adopting DevSecOps. And while organizations talk about DevSecOps, and many folks have adopted DevOps, they tend to forget the security part of DevSecOps. And so for me, DevSecOps is both DevSec, how do I shift security left into my supply chain, and SecOps, which is a better understood and more common piece of the puzzle, but then closing that loop between what issues are discovered in production and feeding that back to the development team to ensure that we're really addressing that supply chain. >> Yeah, I heard a stat. I don't know what the source is, I don't know if it's true, but it probably is, that around 50% of the organizations in North America don't even have a SecOps team. Now of course that probably includes a lot of smaller organizations, but without a SecOps team, they're not doing DevSecOps. So what are organizations doing for supply chain security today? >> Yeah, I think the most common practice that people have adopted is vulnerability scanning. And so they will do that as part of their development process. They might do it at one particular point, they might do it at more than one point. But one of the challenges that we see, first of all, is that that's the only security gate that they've integrated into their supply chain, into their pipeline. So they may be scanning code that they get externally, they may be scanning their own code. 
But the second challenge is that the results take so much work to triage. This is static vulnerability scanning. You get information that is not in full context, because you don't know whether a vulnerability is truly exploitable unless you know how exposed that particular part of the code is to the internet, for example, or to other aspects. And so it's just a real challenge for organizations who are only looking at static vulnerability data to figure out what the right steps to take are to manage those. And there's no way we're going to wind up with zero vulnerabilities in the code that we're all working with today. Things just move too quickly. >> Is that idea of vulnerability scanning, is it almost like sampling, where you may or may not find the weakest link? >> I would say that it's more comprehensive than that. The vulnerability scanners that are available are generally pretty strong, but again, if it's a static environment, a lot of them rely on the NVD database, which typically is going to give you the worst-case scenario, and by nature can't account for things like whether the software that you're scanning was built with controls, with mitigations built in. It's just going to tell you, this is the package, and these are the known vulnerabilities associated with that package. It's not going to tell you whether there were compile-time flags that may have mitigated that vulnerability. And so it's almost overwhelming for organizations to prioritize that information and really understand it in context. And so when I think about the closed-loop feedback, you really want not just that static scan, but also analysis that takes into account the configuration of the application, the runtime environment, and any mitigations that might be present there. >> I see, thank you for that. So, given that digital risk and software supply chains are now front and center, we read about them all the time now, how do you think organizations are responding? 
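The triage problem Kirsten describes, static findings that lack runtime context, can be sketched in a few lines. Everything here is illustrative: the finding fields, exposure labels, and scoring weights are hypothetical, not part of any real scanner's output format.

```python
# Illustrative sketch: combine static vulnerability findings with runtime
# exposure context to decide triage order. Field names are made up.

def prioritize(findings, exposure):
    """Sort findings so internet-exposed, unmitigated packages come first."""
    def score(f):
        base = f["cvss"]
        if exposure.get(f["package"]) == "internet-facing":
            base += 3.0  # anything reachable from the internet gets bumped
        if f.get("mitigated"):  # e.g. compile-time hardening was applied
            base -= 5.0
        return base
    return sorted(findings, key=score, reverse=True)

findings = [
    {"package": "libfoo", "cvss": 9.8, "mitigated": True},
    {"package": "webapp", "cvss": 6.5},
]
exposure = {"webapp": "internet-facing", "libfoo": "internal"}

ranked = prioritize(findings, exposure)
print([f["package"] for f in ranked])
```

Note how the nominally "worse" CVE (9.8 on libfoo) drops below the internet-facing one once context is applied, which is exactly the re-ranking a static scan alone cannot do.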
What's the future of the software supply chain going to look like? >> That's a great one. So I think organizations are scrambling. We've certainly, at Red Hat, seen an increase in questions about Red Hat's own supply chain security, and we've got lots of information that we can share and make available. But I think also we're starting to see this strong increased interest in the software bill of materials. So I actually started working with automation and standards around the software bill of materials a number of years ago. I participated in the Linux Foundation SPDX project. There are other projects like CycloneDX. But I think all organizations are going to need to, those of us who deliver software, we're going to need to provide SBOMs, and consumers of our software should be looking for SBOMs, to help them understand, to build transparency across the projects. And to facilitate that automation, you can leverage the data in a software package list to get a quick view of vulnerabilities. Again, you don't have that runtime context yet, but it saves you that step, perhaps, of having to do the initial scanning. And then there are additional things that folks are looking at. Attested pipelines are going to be key for building your custom software. As you pull the code in and your developers build their solutions, their applications, being able to vet the steps in your pipeline and attest that nothing has happened in that pipeline is really going to be key. >> So the software bill of materials is going to give you a granular picture of your software, and then what, the chain of provenance, if you will? >> Well, depending on the format, an SBOM absolutely can provide a chain of provenance. But another thing when we think about it from the security angle, so there's the provenance: where did this come from? Who provided it to me? 
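A minimal illustration of what an SBOM's package list enables. The document below is a hand-written fragment in the style of CycloneDX JSON; real SBOMs are emitted by build tooling and carry far more detail (licenses, hashes, dependency graphs).

```python
import json

# Hand-written SBOM fragment in CycloneDX JSON style (illustrative only).
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {"type": "library", "name": "openssl", "version": "3.0.8"},
        {"type": "library", "name": "zlib", "version": "1.2.13"},
    ],
}

# A consumer can round-trip the document and extract the package list,
# then feed it to a vulnerability database lookup without rescanning
# the artifact itself -- the shortcut Kirsten describes.
doc = json.loads(json.dumps(sbom))
packages = [(c["name"], c["version"]) for c in doc["components"]]
print(packages)
```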
But also with that bill of materials, that list of packages, you can leverage tooling that will give you vulnerability information about those packages. At Red Hat we don't think that vulnerability info should be included in the SBOM, because vulnerability data changes every day. But it saves you a step potentially. Then you don't necessarily have to be so concerned about doing the scan; you can pull data about known vulnerabilities for those packages without a scan. Similarly, the attestation in the pipeline, that's about things like ensuring that the code that you pull into your pipeline is signed. Signatures are in many ways a more important piece for defining provenance and getting trust. >> Got it. So I was talking to a CISO the other day, and was asking her, okay, what are your main challenges, kind of the standard analyst questions, if you will. She said, look, I've got great people, but I just don't have enough depth of talent to handle the challenges; I'm always sort of playing catch-up. That leads one to the conclusion, okay, automation is potentially an answer to address that problem, but at the same time, people have said to me, sometimes we put too much faith in automation. So, okay, hey Kirsten, help me square the circle. I want to automate because I lack the talent, but it's not, it's not sufficient. What are your thoughts on automation? >> So I think in the world we're in today, especially with cloud native applications, you can't manage without automation, because things are moving too quickly. So I think the way that you assess whether automation is meeting your goals becomes critical. And so looking for external guidance, such as NIST's Secure Software Development Framework, can help. But again, when we come back, I think, look for an opinionated position from the vendors, from the folks you're working with, from your advisors, on what the appropriate set of gates is. 
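The verify-before-use step behind signed code can be shown in miniature. Real systems use asymmetric signatures and key management (Sigstore-style tooling, for example), not bare hashes; this sketch only demonstrates the shape of the check a pipeline gate performs.

```python
import hashlib

# Sketch of artifact verification: before pulling code into a pipeline,
# check that its digest matches what the publisher attested. Asymmetric
# signatures add "who attested" on top of this; the comparison is the same.

def verify(artifact: bytes, expected_digest: str) -> bool:
    return hashlib.sha256(artifact).hexdigest() == expected_digest

artifact = b"example release tarball contents"
published = hashlib.sha256(artifact).hexdigest()  # what the vendor publishes

ok = verify(artifact, published)            # untampered artifact
tampered = verify(artifact + b"!", published)  # modified in transit
print(ok, tampered)
```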
And we've talked about vulnerability scanning, but analyzing the config data for your apps is just as important. And so I think we have to work together as an industry to figure out what the key security gates are, how we audit the automation, so that I can validate that automation and be comfortable that it is actually meeting the needs. But I don't see how we move forward without automation. >> Excellent. Thank you. We were forced into digital without a lot of thought. Some folks, it's a spectrum, some organizations are in better shape than others, but many had to just dive right in without a lot of strategy. And now people have sat back and said, okay, let's be more planful, more thoughtful. So as you, and then of course, you've got the supply chain hacks, et cetera. How do you think the whole narrative and the strategy is going to change? How should it change the way in which we create, maintain, and consume software, as both organizations and individuals? >> Yeah. So again, I think there's going to be, and there already is, a need, a request, for more transparency from software vendors. This is a place where SBOMs play a role, but there's also a lot of conversation out there about zero trust. So what does that mean? You have to have a relationship with your vendor that provides transparency, so that you can assess the level of trust. You also have to, in your organization, determine, to your point earlier about people with skills and automation, how do you trust but verify? This is not just with your vendor, but also with your internal supply chain. So trust and verify remains key. That's been a concept that's been around for a while. Cloud native doesn't change that, but it may change the tools that we use. And we may also decide what our trust boundaries are. Where are we comfortable trusting? Where do we think that zero trust is a more applicable frame to apply? 
But I do think back to the automation piece, and again, it is hard for everybody to keep up. I think we have to break down silos, we have to ensure that teams are talking across those silos, so that we can leverage each other's skills. And we need to think about managing everything as code. What I like about the everything-as-code approach, including security, is that it creates auditability in new ways. If you're managing your infrastructure and your security policies with a GitOps-like approach, it provides visibility and auditability, and it enables your dev team to participate in new ways. >> So when you're talking about zero trust I think, okay, I can't trust users, I've got to trust the verified users, machines, employees, my software, my partners. >> Yep. >> Every possible connection point. >> Absolutely. And this is where both attestation and identity become key. So being able to, I mean, the SolarWinds team has done a really interesting set of things with their supply chain in response to the hack they were dealing with. They're now using Tekton Chains to ensure that they have attested every step in their supply chain process, and that they can replicate that with automation. So they're doing a combination of: yep, we've got humans who need to interact with the chain, and then we can validate every step in that chain. And then workload identity is a key thing for us to think about too. So how do we assert identity for the workloads that are being deployed to the cloud and verify, whether that's with SPIFFE and SPIRE or related projects, that the workload is the one that we meant to deploy, and also do runtime behavioral analysis? I know we've been talking about supply chain, but again, I think we have to do this closed loop. You can't just think about shifting security left. 
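The workload identity idea can be illustrated with a toy check on SPIFFE-style IDs. The trust domain below is hypothetical, and real verification is cryptographic (SPIRE issues and validates SVIDs), not string parsing; this only shows the policy decision that sits on top.

```python
# Toy admission check on a SPIFFE-style workload identity.
# SPIFFE IDs have the form spiffe://<trust-domain>/<workload-path>.

ALLOWED_TRUST_DOMAIN = "prod.example.com"  # hypothetical trust domain

def is_trusted(spiffe_id: str) -> bool:
    """Admit only workloads whose ID sits in our trust domain."""
    prefix = "spiffe://"
    if not spiffe_id.startswith(prefix):
        return False
    trust_domain = spiffe_id[len(prefix):].split("/", 1)[0]
    return trust_domain == ALLOWED_TRUST_DOMAIN

print(is_trusted("spiffe://prod.example.com/payments/api"))
print(is_trusted("spiffe://attacker.example.org/payments"))
```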
And I know you mentioned earlier, a lot of teams don't have SecOps, but there are solutions available that help assess behavior at runtime, and that information can be fed back to the app dev team, to help them adjust and verify and validate: where do I need to tighten my security? >> I'm glad you brought up SolarWinds, Kirsten, and what they're doing. As I remember, after 9/11, everyone was afraid to fly, but it was probably the safest time in history to fly. And so, same analogy here. SolarWinds probably has learned more about this than most, and its reputation took a huge hit. But if you had to compare what SolarWinds has learned and applied, at the speed at which they've done it, with maybe some other software suppliers, you might find that they've actually done a better job. It's just, unfortunately, that something hit that we never saw before. To me it was like Stuxnet: we'd never seen anything like this before, and then boom, we've entered a whole new era. I'll give you the last word, Kirsten. >> No, just to agree with you. And I think, again, as an industry, it's pushed us all to think harder and more carefully about where do we need to improve? What tools do we need to build to help ourselves? Again, SBOMs have been around for a good 10 years or so, but they are enjoying a resurgence of importance. Signing, image signing, manifest signing, that's been around for ages, but we haven't made it easy to integrate that into the supply chain, and that's work that's happening today. Similarly, that attestation of a supply chain, of a pipeline, that's happening. So I think as an industry, we've all recognized that we need to step up, and there's a lot of creative energy going into improving in this space. >> Excellent. Kirsten Newcomer, thanks so much for your perspectives. Excellent conversation. >> My pleasure, thanks so much. >> You're welcome. And you're watching theCUBE, the leader in tech coverage. (soft music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Kirsten | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Kirsten Newcomer | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
NIST | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
SolarWinds | ORGANIZATION | 0.99+ |
second challenge | QUANTITY | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Tekton | ORGANIZATION | 0.99+ |
North America | LOCATION | 0.99+ |
10 years | QUANTITY | 0.99+ |
DevSecOps | TITLE | 0.99+ |
more than one point | QUANTITY | 0.98+ |
around 50% | QUANTITY | 0.98+ |
today | DATE | 0.97+ |
Stuxnet | PERSON | 0.96+ |
first | QUANTITY | 0.96+ |
DevSec | TITLE | 0.95+ |
Secure Software Development Framework | TITLE | 0.93+ |
SecOps | TITLE | 0.9+ |
point | QUANTITY | 0.89+ |
zero vulnerabilities | QUANTITY | 0.88+ |
zero trust | QUANTITY | 0.87+ |
Asiso | ORGANIZATION | 0.85+ |
of years ago | DATE | 0.73+ |
911 | OTHER | 0.7+ |
DevOps | TITLE | 0.67+ |
CycloneDX | TITLE | 0.66+ |
Ops | ORGANIZATION | 0.65+ |
SPIFFE SPIRE | TITLE | 0.65+ |
DevSecOps | ORGANIZATION | 0.63+ |
theCUBE | ORGANIZATION | 0.61+ |
SPDX | TITLE | 0.41+ |
Linux | ORGANIZATION | 0.21+ |
Matt Maccaux, HPE | HPE Discover 2021
(bright music) >> Data by its very nature is distributed and siloed, but most data architectures today are highly centralized. Organizations are increasingly challenged to organize and manage data, and turn that data into insights. This idea of a single monolithic platform for data, it's giving way to new thinking, where a decentralized approach, with open cloud native principles and federated governance, will become an underpinning of digital transformations. Hi everybody. This is Dave Vellante. Welcome back to HPE Discover 2021, the virtual version. You're watching theCUBE's continuous coverage of the event and we're here with Matt Maccaux, who's a field CTO for Ezmeral Software at HPE. We're going to talk about HPE software strategy, and Ezmeral, and specifically how to take AI analytics to scale and ensure the productivity of data teams. Matt, welcome to theCUBE. Good to see you. >> Good to see you again, Dave. Thanks for having me today. >> You're welcome. So talk a little bit about your role as a CTO. Where do you spend your time? >> I spend about half of my time talking to customers and partners about where they are on their digital transformation journeys and where they struggle with this sort of last phase where we start talking about bringing those cloud principles and practices into the data world. How do I take those data warehouses, those data lakes, those distributed data systems, into the enterprise and deploy them in a cloud-like manner? Then the other half of my time is working with our product teams to feed that information back, so that we can continually innovate to the next generation of our software platform. >> So, I remember, I've been following HP and HPE for a long, long time, theCUBE has documented, we go back to sort of when the company was breaking in two parts, and at the time a lot of people were saying, "Oh, HP is getting rid of their software business, they're getting out of software." I said, "No, no, no, hold on.
They're really focusing", and the whole focus around hybrid cloud and now as a service, you've really retooled that business and sharpened your focus. So tell us more about Ezmeral, it's a cool name, but what exactly is Ezmeral software? >> I get this question all the time. So what is Ezmeral? Ezmeral is a software platform for modern data and analytics workloads, using open source software components. We came from some inorganic growth. We acquired a company called Scytale, that brought us a zero trust approach to doing security with containers. We bought BlueData, who came to us with an orchestrator before Kubernetes even existed in the mainstream. They were orchestrating workloads using containers for some of these more difficult workloads. Clustered applications, distributed applications like Hadoop. Then finally we acquired MapR, which gave us this scale-out distributed file system and additional analytical capabilities. What we've done is we've taken those components, and we've also gone out into the marketplace to see what open source projects exist to allow us to bring those cloud principles and practices to these types of workloads, so that we can take things like Hadoop, and Spark, and Presto, and deploy and orchestrate them using open source Kubernetes. Leveraging GPUs, while providing that zero trust approach to security. That's what Ezmeral is all about: taking those cloud practices and principles, but without locking you in. Again, using those open source components where they exist, and then committing and contributing back to the open source community where those projects don't exist.
So at the same time they participated in open source projects because everybody did, that's where the innovation is going. So you're making that really hard-to-use stuff easier to use with Kubernetes orchestration, and then obviously, I'm presuming with the open source chops, sort of leaning into the big trends that you're seeing in the marketplace. So my question is, what are those big trends that you're seeing when you speak to technology executives, which is a big part of what you do? >> So the trends, I think, are a couple-fold, and it's funny about Hadoop, but I think the final nails in the coffin have been hammered in with the Hadoop space now. So that leading trend, of where organizations are going, we're seeing organizations wanting to go cloud first. But they really struggle with these data-intensive workloads. Do I have to store my data in every cloud? Am I going to pay egress in every cloud? Well, what if my data scientists are most comfortable in AWS, but my data analysts are more comfortable in Azure? How do I provide that multi-cloud experience for these data workloads? That's the number one question I get asked, and that's probably the biggest struggle for these chief data officers, chief digital officers: how do I allow that innovation while maintaining control over my data compliance, especially when we talk about international standards, like GDPR, to restrict access to data, the ability to be forgotten? In these multinational organizations, how do I sort of square all of those components? Then how do I do that in a way that just doesn't lock me into another appliance or software vendor stack? I want to be able to work within the confines of the ecosystem, use the tools that are out there, but allow my organization to innovate in a very structured, compliant way.
And again, you heard my rap up front. I really do think that we've created, not only from a technology standpoint, and yes the tooling is important, but so is the organization, and as you said, an analyst might want to work in one environment, a data scientist might want to work in another environment. The data may be very distributed. You might have situations where they're supporting the line of business. The line of business is trying to build new products, and if I have to go through this monolithic centralized organization, that's a barrier for me. And so we're seeing that change, that I kind of alluded to up front, but what do you see as the big barriers that are blocking this vision from becoming a reality?
It's Ezmeral Software that enables that capability, such as self-service and secure data provisioning, et cetera. >> Again, I love this conversation because if you go back to the early days of Hadoop, that was what was profound about Hadoop: bring five megabytes of code to a petabyte of data. And it didn't happen. We shoved it all into a data lake and it became a data swamp. And that's okay, that was the one-dot-oh. You know, maybe in data it's like data warehouses, data hubs, data lakes, and maybe this is now a four-dot-oh, but we're getting there. But open source, one thing's for sure, it continues to gain momentum; it's where the innovation is. I wonder if you could comment on your thoughts on the role that open source software plays for large enterprises, maybe some of the hurdles that are there, whether they're legal or licensing, or just fears. How important is open source software today? >> I think the cloud native developments, following the 12-factor applications, microservices based, paved the way over the last decade to make using open source technology tools and libraries mainstream. We have to tip our hats to Red Hat, right? For allowing organizations to embrace something so core as an operating system within the enterprise. But what everyone realized is that it's support that has to come with that. So we can allow our data scientists to use open source libraries, packages, and notebooks, but are we going to allow those to run in production? So if the answer is no, well, then if we can't get support, we're not going to allow that. So where HPE Ezmeral is taking the lead here is, again, embracing those open source capabilities, but, if we deploy it, we're going to support it. Or we're going to work with the organization that has the committers to support it. You call HPE, the same phone number you've been calling for years for tier-one, 24-by-7 support, and we will support your Kubernetes, your Spark, your Presto, your Hadoop ecosystem of components.
We're that throat to choke and we'll provide, all the way up to break/fix support, for some of these components and packages, giving these large enterprises the confidence to move forward with open source, but knowing that they have a trusted partner in which to do so. >> And that's why we've seen such success with say, for instance, managed services in the cloud, versus throwing out all the animals in the zoo and say, okay, figure it out yourself. But then, of course, what we saw, which was kind of ironic, was people finally said, "Hey, we can do this in the cloud more easily." So that's where you're seeing a lot of data land. However, the definition of cloud or the notion of cloud is changing. No longer is it just this remote set of services, "Somewhere out there in the cloud", some data center somewhere, no, it's moving to on-prem, on-prem is creating hybrid connections. You're seeing co-location facilities very proximate to the cloud. We're talking now about the edge, the near edge, and the far edge, deeply embedded. So that whole notion of cloud is changing. But I want to ask you, there's still a big push to cloud, everybody has a cloud first mantra, how do you see HPE competing in this new landscape? >> I think collaborating is probably a better word, although you could certainly argue if we're just leasing or renting hardware, then it would be competition, but I think again... The workload is going to flow to where the data exists. So if the data's being generated at the edge and being pumped into the cloud, then cloud is prod. That's the production system. If the data is generated via on-premises systems, then that's where it's going to be executed. That's production, and so HPE's approach is very much co-exist. It's a co-exist model of, if you need to do DevTests in the cloud and bring it back on-premises, fine, or vice versa. 
The key here is not locking our customers and our prospective clients into any sort of proprietary stack, as we were talking about earlier, giving people the flexibility to move those workloads to where the data exists, that is going to allow us to continue to get share of wallet, mind share, continue to deploy those workloads. And yes, there's going to be competition that comes along. Do you run this on GCP or do you run it on a GreenLake on-premises? Sure, we'll have those conversations, but again, if we're using open source software as the foundation for that, then actually where you run it is less relevant. >> So there's a lot of choices out there, when it comes to containers generally and Kubernetes specifically, and you may have answered this, you get the zero trust component, you've got the orchestrator, you've got the scale-out piece, but I'm interested in hearing in your words why an enterprise would or should consider Ezmeral instead of alternatives to Kubernetes solutions? >> It's a fair question, and it comes up in almost every conversation. "Oh, we already do Kubernetes, we have a Kubernetes standard", and that's largely true in most of the enterprises I speak to. They're using one of the many on-premises distributions or their cloud distributions, and they're all fine. They're all fine for what they were built for. Ezmeral was generally built for something a little different. Yes, everybody can run microservices-based applications, DevOps-based workloads, but where Ezmeral is different is for those data-intensive, clustered applications. Those sorts of applications require a certain degree of network awareness, persistent storage, et cetera, which requires a significant amount of intelligence. Either you have to write in Golang, or you have to write your own operators, or Ezmeral can be that easy button. We deploy those stateful applications, because we bring a persistent storage layer, that came from MapR.
We're really good at deploying those stateful clustered applications, and, in fact, we've open-sourced that as a project, KubeDirector, that came from BlueData, and we're really good at securing these, using SPIFFE and SPIRE, to ensure that there's that zero trust approach, that came from Scytale, and we've wrapped all of that in Kubernetes. So now you can take the most difficult, gnarly, complex, data-intensive applications in your enterprise and deploy them using open source. And if that means we have to co-exist with an existing Kubernetes distribution, that's fine. That's actually the most common scenario that I walk into: I start asking about, "What about these other applications you haven't done yet?" The answer is usually, "We haven't gotten to them yet", or "We're thinking about it", and that's when we talk about the capabilities of Ezmeral, and I usually get the response, "Oh, A, we didn't know you existed, and B, well, let's talk about how exactly you do that." So again, it's more of a co-exist model rather than a compete-with model, Dave. >> Well, that makes sense. I mean, I think again, a lot of people, they go, "Oh yeah, Kubernetes, no big deal. It's everywhere." But you're talking about a solution, kind of taking a platform approach with capabilities. You've got to protect the data. A lot of times, these microservices aren't so micro and things are happening really fast. You've got to be secure. You've got to be protected. And like you said, you've got a single phone number. You know, people say one throat to choke. Somebody in the media the other day said, "No, no. Single hand to shake." It's more of a partnership. I think that's apropos for HPE, Matt, with your heritage. >> That one's better. >> So, you know, thinking about this whole space, we've gone through the pre-big-data days, and big data was the hot buzzword. People don't maybe necessarily use that term anymore, although the data is bigger and getting bigger, which is kind of ironic.
Where do you see this whole space going? We've talked about that sort of trend toward breaking down the silos, decentralization, maybe these hyper specialized roles that we've created, maybe getting more embedded or aligned with the line of business. How do you see... It feels like the next 10 years are going to be different than the last 10 years. How do you see it, Matt? >> I completely agree. I think we are entering this next era, and I don't know if it's well-defined. I don't know if I would go out on an edge to say exactly what the trend is going to be. But as you said earlier, data lakes really turned into data swamps. We ended up with lots of them in the enterprise, and enterprises had to allow that to happen. They had to let each business unit or each group of users collect the data that they needed and IT sort of had to deal with that down the road. I think that the more progressive organizations are leading the way. They are, again, taking those lessons from cloud and application developments, microservices, and they're allowing a freedom of choice. They're allowing data to move, to where those applications are, and I think this decentralized approach is really going to be king. You're going to see traditional software packages. You're going to see open source. You're going to see a mix of those, but what I think will probably be common throughout all of that is there's going to be this sense of automation, this sense that, we can't just build an algorithm once, release it and then wish it luck. That we've got to treat these analytics, and these data systems, as living things. That there's life cycles that we have to support. Which means we need to have DevOps for our data science. We need a CI/CD for our data analytics. We need to provide engineering at scale, like we do for software engineering. That's going to require automation, and an organizational thinking process, to allow that to actually occur. I think all of those things. 
The sort of people, process, products. It's all three of those things that are going to have to come into play, but stealing those best ideas from cloud and application developments, I think we're going to end up with probably something new over the next decade or so. >> Again, I'm loving this conversation, so I'm going to stick with it for a sec. It's hard to predict, but some takeaways that I have, Matt, from our conversation, I wonder if you could comment? I think the future is more open source. You mentioned automation, Devs are going to be key. I think governance as code, security designed in at the point of code creation, is going to be critical. It's no longer going be a bolt on. I don't think we're going to throw away the data warehouse or the data hubs or the data lakes. I think they become a node. I like this idea, I don't know if you know Zhamak Dehghani? but she has this idea of a global data mesh where these tools, lakes, whatever, they're a node on the mesh. They're discoverable. They're shareable. They're governed in a way. I think the mistake a lot of people made early on in the big data movement is, "Oh, we got data. We have to monetize our data." As opposed to thinking about what products can I build that are based on data that then can lead to monetization? I think the other thing I would say is the business has gotten way too technical. (Dave chuckles) It's alienated a lot of the business lines. I think we're seeing that change, and I think things like Ezmeral that simplify that, are critical. So I'll give you the final thoughts, based on my rant. >> No, your rant is spot on Dave. I think we are in agreement about a lot of things. Governance is absolutely key. If you don't know where your data is, what it's used for, and can apply policies to it. It doesn't matter what technology you throw at it, you're going to end up in the same state that you're essentially in today, with lots of swamps. I did like that concept of a node or a data mesh. 
It kind of goes back to the similar thing with a service mesh, or a set of APIs that you can use. I think we're going to have something similar with data. The trick is always, how heavy is it? How easy is it to move about? I think there's always going to be that latency issue, maybe not within the data center, but across the WAN. Latency is still going to be key, which means we need to have really good processes to be able to move data around. As you said, govern it. Determine who has access to what, when, and under what conditions, and then allow it to be free. Allow people to bring their choice of tools, provision them how they need to, while providing that audit, compliance and control. And then again, as you need to provision data across those nodes for those use cases, do so in a well-measured and governed way. I think that's sort of where things are going. But we keep using that term governance; I think that's so key, and there's nothing better than using open source software because that provides traceability, auditability, and, frankly, this openness that allows you to say, "I don't like where this project's going. I want to go in a different direction." And it gives those enterprises control over these platforms that they've never had before. >> Matt, thanks so much for the discussion. I really enjoyed it. Awesome perspectives. >> Well, thank you for having me, Dave. Excellent conversation as always. Thanks for having me again. >> You're very welcome. And thank you for watching, everybody. This is theCUBE's continuous coverage of HPE Discover 2021. Of course, the virtual version. Next year, we're going to be back live. My name is Dave Vellante. Keep it right there. (upbeat music)
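The governance thread running through this conversation, determining who has access to what, when, and under what conditions, is often expressed today as policy-as-code. The sketch below is a minimal, hypothetical illustration of that idea; the roles, purposes, and regions are invented for the example and are not part of any HPE or Ezmeral API (a real deployment would typically use a policy engine such as Open Policy Agent and organization-specific attributes).

```python
from dataclasses import dataclass

# Hypothetical access-request attributes, for illustration only.
@dataclass(frozen=True)
class AccessRequest:
    role: str         # e.g., "data_scientist", "analyst"
    purpose: str      # e.g., "model_training", "marketing"
    region: str       # where the requester operates
    data_region: str  # where the data is stored

def is_allowed(req: AccessRequest) -> bool:
    """Evaluate a request against simple, auditable rules."""
    if req.role not in {"data_scientist", "analyst"}:
        return False
    # GDPR-style residency rule: EU-resident data only for EU requesters.
    if req.data_region == "eu" and req.region != "eu":
        return False
    # Purpose limitation: marketing use requires separate consent (denied here).
    if req.purpose == "marketing":
        return False
    return True

assert is_allowed(AccessRequest("data_scientist", "model_training", "eu", "eu"))
assert not is_allowed(AccessRequest("data_scientist", "model_training", "us", "eu"))
assert not is_allowed(AccessRequest("intern", "model_training", "us", "us"))
```

Because the rules are code, they are versioned, reviewable, and testable, which is what gives governance the traceability and auditability discussed above.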
Sunil James, Sr Director, HPE [ZOOM]
(bright music) >> Welcome back to HPE Discover 2021. My name is Dave Vellante and you're watching theCUBE's virtual coverage of Discover. We're going to dig into the most pressing topic, not only for IT, but entire organizations. And that's cyber security. With me is Sunil James, senior director of security engineering at Hewlett Packard Enterprise. Sunil, welcome to theCUBE. Come on in. >> Dave, thank you for having me. I appreciate it. >> Hey, you talked about project Aurora today. Tell us about project Aurora, what is that? >> So I'm glad you asked. Project Aurora is a new framework that we're working on that attempts to provide the underpinnings for Zero Trust architectures inside of everything that we build at HPE. Zero Trust is a way of providing a mechanism for enterprises to allow for everything in their enterprise, whether it's a server, a human, or anything in between, to be verified and attested to before they're allowed to access or transact in certain ways. That's what we announced today. >> Well, so in response to a spate of damaging cyber attacks last month, President Biden issued an executive order designed to improve the United States' security posture. And in that order, he essentially issued a Zero Trust mandate. You know, it's interesting, Sunil. Zero Trust has gone from a buzzword to a critical part of a security strategy. So in thinking about a Zero Trust architecture, how do you think about that, and how does project Aurora fit in? >> Yeah, so Zero Trust architecture, as a concept, has been around for quite some time now. And over the last few years, we've seen many a company attempting to provide technologies that they purport to be Zero Trust. Zero Trust is a framework. It's not one technology, it's not one tool, it's not one product. It is an entire framework of thinking and applying cybersecurity principles to everything that we just talked about beforehand. 
Project Aurora, as I said beforehand, is designed to provide a way for ourselves and our customers to be able to measure, attest, and verify every single piece of technology that we sell to them. Whether it's a server or everything else in between. Now, we've got a long way to go before we're able to cover everything that HPE sells. But for us, these capabilities are the root of Zero Trust architectures. You need to be able to, at any given moment's notice, verify, measure, and attest, and this is what we're doing with project Aurora. >> So you founded a company called Scytale and sold that to HPE last year. And my understanding is you were really the driving force behind the secure production identity framework, but you said Zero Trust is really a framework. That's an open source project. Maybe you can explain what that is. I mean, people talk about the NIST Framework for cybersecurity. How does that relate? Why is this important and how does Aurora fit into it? >> Yeah, so that's a good question. The NIST Framework is a broader framework for cybersecurity that couples and covers many aspects of thinking about the security posture of an enterprise, whether it's network security, host based intrusion detection capabilities, incident response, things of that sort. SPIFFE, which you're referring to, Secure Production Identity Framework For Everyone, is an open source framework and technology base that we did work on when I was the CEO of Scytale, that was designed to provide a platform agnostic way to assign identity to anything that runs in a network. And so think about yourself or myself. We have identities in our back pocket, driver's license, passports, things of that sort. They provide a unique assertion of who we are, and what we're allowed to do. That does not exist in the world of software. 
And what SPIFFE does is it provides that mechanism so that you can actually use frameworks like project Aurora that can verify the underpinning infrastructure on top of which software workloads run, to be able to verify those SPIFFE identities even better than before. >> Is the intent to productize this capability, you know, within this framework? How do you approach this from HPE's standpoint? >> So SPIFFE and SPIRE will, as far as I'm concerned, always remain an open source project held by the Cloud Native Computing Foundation. It's for the world, all right. And we want that to be the case because we think that most of our enterprise customers are not living in the world of one vendor or two vendors. They have multiple vendors. And so we need to give them the tools and the flexibility to be able to allow for open source capabilities like SPIFFE and SPIRE to provide a way for them to assign these identities and assign policies and control, regardless of the infrastructure choices they make today or tomorrow. HPE recognizes that this is a key differentiating capability for our customers. And our goal is to be able to look at our offerings that power the next generation of workloads: Kubernetes instances, containers, serverless, and anything that comes after that. And our responsibility is to say, "How can we actually take what we have and be able to provide those kinds of assertions, those underpinnings for Zero Trust that are going to be necessary to distribute those identities to those workloads, and to do so in a scalable, effective, and automated manner?" Which is one of the most important things that project Aurora does. >> So a lot of companies, Sunil, will set up a security division. But is the HPE strategy to essentially embed security across its entire portfolio? How should we think about HPE strategy in cyber? >> Yeah, so it's a great question.
HPE has a long history in security and other domains, networking, and servers, and storage, and beyond. The way we think about what we're building with project Aurora, this is plumbing. This is plumbing that must be in everything we build. Customers don't buy one product from us and they think it's one company, and something else from us, and they think it's another company. They're buying HPE products. And our goal with project Aurora is to ensure that this plumbing is widely and uniformly distributed and made available. So whether you're buying an Aruba device, a Primera storage device, or a ProLiant server, project Aurora's capabilities are going to provide a consistent way to do the things that I've mentioned beforehand to allow for those Zero Trust architectures to become real. >> So, as I alluded to President Biden's executive order previously. I mean, you're a security practitioner, you're an expert in this area. It just seems as though, and I'd love to get your comments on this. I mean, the adversaries are well-funded, you know, they're either organized crime, they're nation states. They're extracting a lot of very valuable information, they're monetizing that. You've seen things like ransomware as a service now. So any knucklehead can be in the ransomware business. So it's just this endless escalation game. How do you see the industry approaching this? What needs to happen? So obviously I like what you're saying about the plumbing. You're not trying to attack this with a bunch of point tools, which is part of the problem. How do you see the industry coming together to solve this problem? >> Yeah. If you operate in the world of security, you have to operate from the standpoint of humility. And the reason why you have to operate from a standpoint of humility is because the attack landscape is constantly changing. 
The things, and tools, and investments, and techniques that you thought were going to thwart an attacker today, they're quickly outdated within a week, a month, a quarter, whatever it might be. And so you have to be able to consistently and continuously evolve and adapt towards what customers are facing on any given moment's notice. I think to be able to, as an industry, tackle these issues more and more, you need to be able to have all of us start to abide, not abide, but start to adopt these open-source patterns. We recognize that every company, HPE included, is here to serve customers and to make money for its shareholders as well. But in order for us to do that, we have to also recognize that they've got other technologies in their infrastructure as well. And so it's our belief, it's my belief, that allowing for us to support open standards with SPIFFE and SPIRE, and perhaps with some of the aspects of what we're doing with project Aurora, I think allows for other people to be able to kind of deliver the same underpinning capabilities, the plumbing, if you will, regardless of whether it's an HPE product or something else along those lines as well. We need more of that generally across our industry, and I think we're far from it. >> I mean, this sounds like a war. I mean, it's more than a battle, it's a war that actually is never going to end. And I don't think there is an end in sight. And you hear CISOs talk about the shortage of talent, they're getting inundated with point products and tools, and then that just creates more technical debt. It's been interesting to watch. Interesting maybe is not the right word. But the pivot to Zero Trust, endpoint security, cloud security, and the exposure that we've now seen as a result of the pandemic, was sort of rushed. And then of course, we've seen, you know, the adversaries really take advantage of that. So, I mean, what you're describing is this ongoing never-ending battle, isn't it?
>> Yeah, yeah, no, it's going to be ongoing. And by the way, Zero Trust is not the end state, right? I mean, there were things that we called the final nail in the coffin five years ago, 10 years ago, and yet the attackers persevered. And that's because there's a lot of innovation out there. There's a lot of infrastructure moving to dynamic architectures like cloud and others that are going to be poorly configured, and are going to not have necessarily the best and brightest providing security around them. So we have to remain vigilant. We have to work as hard as we can to help customers deploy Zero Trust architectures. But we have to be thinking about what's next. We have to be watching, studying, and evolving to be able to prepare ourselves, to be able to go after whatever the next capabilities are. >> What I like about what you're saying is, you're right. You have to have humility. I don't want to say, I mean, it's hard because I do feel like a lot of times the vendor community says, "Okay, we have the answer," to your point. "Okay, we have a Zero Trust solution." Or, "We have a solution." And there is no silver bullet in this game. And I think what I'm hearing from you is, look, we're providing infrastructure, plumbing, the substrate, but it's an open system. It's got to evolve. And the thing you didn't say, but I'd love your thoughts on this, is we've got to collaborate with somebody you might think is your competitor. 'Cause they're the good guys. >> Yeah. Our customers don't care that we're competitors with anybody. They care that we're helping them solve their problems for their business. So our responsibility is to figure out what we need to do to work together to provide the basic capabilities that allow for our customers to remain in business, right? If cybersecurity issues plague any of our customers, that doesn't affect just HPE, that affects all of the companies that are serving that customer.
And so, I think we have a shared responsibility to be able to protect our customers. >> And you've been in cyber for much, if not most of your career, right? >> Correct. >> So I got to ask you, did you have a superhero when you were a kid? Did you have a sort of a, you know, save the world thing going? >> Did I have a, you know, I didn't have a save the world thing going, but I had, I had two parents that cared for the world in many, many ways. They were both in the world of healthcare. And so everyday I saw them taking care of other people. And I think that probably rubbed off in some of the decisions that I make too. >> Well it's awesome. You're doing great work, really appreciate you coming on theCUBE, and thank you so much for your insights. >> I appreciate that, thanks. >> And thank you for being with us for our ongoing coverage of HPE Discover 21. This is Dave Vellante. You're watching theCUBE. The leader in digital tech coverage. We'll be right back. (bright music)
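Stepping outside the conversation for a moment: the SPIFFE identities James refers to are, per the SPIFFE specification, structured URIs of the form spiffe://<trust-domain>/<workload-path>, carried in short-lived SVID documents that frameworks in the SPIRE mold issue and rotate. A minimal sketch of the naming layer only (the trust domain and path below are invented for illustration):

```python
from urllib.parse import urlparse

def parse_spiffe_id(uri):
    """Split a SPIFFE ID into (trust_domain, workload_path).

    A SPIFFE ID is a URI with the fixed scheme "spiffe", a trust
    domain in the authority position, and the workload path after it.
    """
    parts = urlparse(uri)
    if parts.scheme != "spiffe" or not parts.netloc:
        raise ValueError("not a SPIFFE ID: %r" % (uri,))
    return parts.netloc, parts.path

# Hypothetical workload identity, not a real deployment's ID.
domain, path = parse_spiffe_id("spiffe://example.org/ns/prod/sa/billing")
```

Issuance, attestation, and rotation of the certificates that carry these IDs are the hard part that SPIRE automates; the parsing above is only the naming convention the rest is built on.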
Dave Husak & Dave Larson, HPE | HPE Discover 2020
>> Narrator: From around the globe, it's theCUBE, covering HPE Discover Virtual Experience, brought to you by HPE. >> Hi, and welcome back to theCUBE's coverage of HPE Discover 2020, the virtual experience. I'm your host, Stu Miniman. I'm really happy to be joined on the program by two of our CUBE alumni, we have the Daves from Hewlett Packard Labs. Sitting in the screen next to me is Dave Husak, he is a fellow and general manager for the Cloudless Initiative. And on the other side of the screen, we have Dave Larson, vice president and CTO of the Cloudless Initiative. Dave and Dave, thank you so much for joining us again. >> Delighted to be here. >> All right, so specifically we're going to be talking a bit about security, obviously, you know, very important in the cloud era. And as we build out cloud native architectures, you know, Dave Husak, I guess, why don't you set the stage for us a little bit, of, you know, where security fits into, you know, HPE overall and, you know, the mission. You know, last year there was a lot of buzz and discussion and interest around Cloudless. So just put that as a start and then we'll get into a lot of discussion about security. >> Right, yeah, last year we did, you know, launch the initiative, and, you know, we framed it as composed of three components, one of which, in fact the most important aspect of which, was the trust fabric, the Cloudless Trust Fabric, which was, you know, built on the idea of intrinsic security for all workload endpoints, right. And this is a theme that you see, a year later, playing out, I think, across the industry. You hear that language and that kind of idea being promoted in the context of zero trust, you know, new capabilities being launched by VMware and other kinds of runtime environments, right. And, you know, the way I like to say it is that we have entered an era of security first in IT infrastructure.
It's no longer going to be practical to build IT infrastructure and then, you know, have products that secure it, right. You know, build perimeters, do micro-segmentation or anything like that. Workload endpoints need to be intrinsically secure. And, you know, the upshot of that really at this point is that all IT infrastructure companies are security companies now. You know it, acknowledge it, like it or not, we're all security companies now. And so, you know, a lot of the principles applied in the Cloudless Trust Fabric are those zero trust principles, based on cryptographic workload identity, leveraging unique aspects of HPE's products and infrastructure that we've already been delivering, with hardware and a silicon root of trust built into our ProLiant servers and other capabilities like that. And, you know, our mission, my mission, is to propel that forward and ensure that HPE is, you know, at the forefront of securing everything. >> Yeah, excellent, definitely, you know, love the security first discussion. Every company we've talked to, absolutely, security is not only a C-level but, you know, typically a board level discussion. I guess my initial feedback, as you would say: if every company today is a security company, many of them might not be living up to the expectation just yet. So Dave Larson, let's say, you know, applications are, you know, at the core of what we look at in cloud native. It's new architectures, new design principles. So give us, you know, HPE's thoughts on how security fits into that, and what's different from how we might've thought about security in the past with applications? >> Well, I think Dave touched on it, right? From a trust fabric perspective, we have to think of moving to something where the endpoints themselves, whether they're workloads or services, are actually intrinsically secure, and that we can instantiate some kind of a zero trust framework that really benefits the applications.
It really isn't sufficient to do intermediate inspection. In fact, the primary reason why that's no longer possible is that the world is moving to encryption everywhere. And as soon as all packets are encrypted in flight, notwithstanding claims to the contrary, it's virtually impossible to do any kind of inference on the flows to apply any meaningful security. But the way we see it is that the transition is moving to a modality where all services, all workloads, all endpoints can be mutually attested, cryptographically identified in a way that allows a zero trust model to emerge, so that all endpoints can know what they are speaking to on the remote end and, by authorization principles, determine whether or not they're allowed to speak to those. So from an HPE perspective, the area where we build is from the bottom up: we have a silicon root of trust in our server platform. It's part of our iLO 5 integrated lights-out baseboard management controller. We can actually deliver a discrete and measurable identity for the hardware and project it up into the workload, into the software realm. >> Excellent, so I heard you mention identity, which makes me think of the Scytale acquisition that HPE made early this year. People in the cloud native community and at KubeCon, you know, SPIFFE of course is a project that had gotten quite a bit of attention. Can you give us a little bit as to how that acquisition fits into this overall discussion we were just having? >> Oh yeah, so we acquired Scytale into the initiative at the beginning of this year. As you understand, Stu, right, cryptographic identity is fundamental to zero trust security because, like Dave pointed out, we're no longer relying on intermediary devices, firewalls, or other kinds of functions to manage, you know, authorize those communications. So the idea of building cryptographic identity into all workload endpoints, devices and data is sort of a cornerstone of any zero trust security strategy.
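The hardware-to-software chain Larson describes, a silicon root of trust whose measurable identity is projected up into the workload, can be sketched with the extend operation used in measured boot. The stage names below are invented, and this is an illustration of the general pattern, not HPE's actual implementation:

```python
import hashlib

def extend(chain, measurement):
    """Measured-boot style extend: SHA-256 over the old chain value
    concatenated with the new measurement. Because order matters, the
    final value attests the whole sequence of stages."""
    return hashlib.sha256(chain + measurement).digest()

# The root of trust starts the chain from a fixed value, then each
# boot stage is measured before control passes to it.
chain = b"\x00" * 32
for stage in [b"firmware", b"bootloader", b"kernel", b"container-runtime"]:
    chain = extend(chain, hashlib.sha256(stage).digest())
```

Change any one stage and the final value differs, which is what lets an attestor tie a workload identity to the exact software stack underneath it.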
We were delighted to bring the team on board. Not only from the standpoint that they are the world's experts, original contributors, and moderators and committers in the stewardship of SPIFFE and SPIRE, the two projects in the CNCF, but, you know, the impact they're going to have on HPE's product development, hardware and software, is going to be outsized. And also, you know, I'll have to point this out as well: this is the most prominent open source project that HPE is now stewarding, right. In terms of acceptance, SPIFFE and SPIRE are both poised, there'll probably be an announcement here shortly, but we expect they're going to be promoted to the incubating phase of CNCF maturity from the Sandbox. They were actually among the first Sandbox projects in the CNCF. And so they're going to join that pantheon of, you know, the top few dozen out of, I think, 1,390 projects in the CNCF. So like you pointed out, Stu, you know, SPIFFE and SPIRE are right now, you know, the world's leading candidate as, you know, sort of the certificate standard for cryptographic workload endpoint identity. And we're looking at that as a very fundamental enabling technology for this transformation that the industry is going to go through. >> Yeah, it's really interesting if we pull on that open source thread a little bit more. You know, I think back to earlier in my career, you know, 15, 20 years ago, and if you talked to a CIO, you know, security might be important to them, but what they were building, their IT infrastructure, was something that they kept very closely held. And if you were a vendor supplying to them, you had to be under NDA to understand it, because that was a differentiation. Now we're talking about cloud, we're talking about open source, you know, even when I talk to the financial institutions, they're all talking amongst themselves about how do we share best practices, because it's not, am I secure? It's, we all need to be secure.
I wonder if you can comment a little bit on that trend, you know, the role of open source. >> Yeah, this is an extension of Kerckhoffs's principle, right? The idea that a security system has to be secure even if you know the system, right; it's only the secrecy of the keys in the communication that's important. And that is playing out at the highest level in our industry now, right. So it is, like I said, cryptographic identity and identity based encryption are the cornerstones of building a zero trust fabric. You know, one of the other things, 'cause you mentioned that, we also observe is the CNCF, the Apache Foundation. The other thing that's, I think, a contrast to 15, 20 years ago: back then, open source was a software development phenomenon, right. Where, you know, the usual idea is, you know, there's repositories of code, you pull them down, you modify them for your own particular purposes and you upstream the changes and such, right. It's less about that now. It is much more a model for open source operations than it is a model for open source development. Most of the people that are pulling down those repositories are just using them, they're not modifying them, right. And as you also, I think, understand, right, the framework of the CNCF landscape is comprehensive, right? You can build an entire IT infrastructure operations environment by, you know, taking storage technologies, security technologies, monitoring, management, you know, it's complete, right. And it is, you know, becoming really, you know, a major operational discipline out there in the world to harness all of that development, harness the open source communities. Not only in the software, not only in the security space, but, I think, you know, comprehensively. And that engine of growth and development is, I think, probably the largest, you know, manpower and brainpower and, you know, operational kind of active daily users model out there now, right.
And it's going to be critical, I think, for this decade that's coming, that the successful IT infrastructure companies are very tightly engaged with those communities and that process, because open source operations is the new thing. It's like, you know, DevOps became OpsDev, or something like that is the trend. >> Yeah, and I'm glad you brought that up, you know. I think about the DevOps movement, it really fused security in, it can't be a bolt-on, it can't be an afterthought. The mantra I've heard over the last few years is security is everyone's responsibility. Dave Larson, you know, the question I have for you is, how do we make sure, you know, policy is enforced? You know, even if I think about an organization where everyone's responsible for it, you know, who's actually making sure that things happen? Because, you know, if everybody's looking after it, it should be okay. But, you know, bring us down a little bit from the application standpoint.
It's something that is easily spoofed, and frankly the nature of that element of the way we connect applications together is the way that almost virtually all exploits, get into the environment and cause problems. If we move to a zero trust model where the individual end points will only speak with only respond to something that is authorized and only things that are authorized and they trust nothing else, we eliminate 95 to 99% of them problem. And we are in an automated stance that will allow us to have much better assurance of the security of the connections between the various endpoints and services. >> Excellent, so, you know, one of the questions that always comes up, some of the pieces we're talking about here are open source. You talk about security and trust across multiple environments. How does HPE differentiate from, you know, everything else out there and, you know, how are you taking the leadership position? I'd love to hear both of your commentary on that. >> Yeah, well, like I said, initially, the real differentiation for us is that HPE was the market leader for industry standard servers, from a security perspective. Three years ago in our ProLiant gen 10 servers, when we announced them, they had the Silicon root of trust and we've shipped more than a million and a half servers into the market with this capability that is unique in the market. And we've been actively extending that capability so that we can project the identity, not just to the actual hardware itself, but that we can bind it in a multi-factor sense, the individual software components that are hosted on that server, whether it's the operating system, a hypervisor, a VM, a container framework, or an actual container, or a piece of it code from a serverless perspective. All of those things need to be able to be identified and we can bring a multi-factor identity capability to individual workloads that can be the underpinning for this zero across connection capability. 
>> Great and David, anything you'd like to add there? >> No, like what he said I think HP is uniquely positioned you know, the depth and the breadth of our installed base of platforms that are already zero trust ready, if you will, right. Coupled with the identity technology that we're developing in the context of the Cytel acquisition and David, my work in a building, the cloudless trust fabric, you know, are the, like I said, the cornerstones of these architectures, right? And HP has a couple of unfair advantages here you know, okay breadth and depth of our, the customer base and the installed base of the system is already put out there. While the world is transitioning, you know, inevitably to these, you know, these kinds of security architectures, these kinds of IT infrastructure architectures, HP has a, you know, a leadership team position by default here that we can take advantage of. And our customers can reap the benefits of without, well, you know, without you know, rebuilding forklift upgrading, or otherwise, you know, it is, yeah as Dave talked about, you know, a lot will change, right. There's more to do, right? As we move from, you know, IP addresses and port numbers, as identities for security, because we know that perimeter security, network security like that is busted, right. It is, you know, every headline making, you know, kind of advanced persistent threat kind of vulnerabilities it's all at the root of all those problems, right. There are technologies like OPA, right you know, policy has to be reframed in the context of workload identity, not in network identity know. Like call this legal sort of the microsegmentation fallacy, right. You know that, you know, perimeters are broken, not a valid security strategy anymore. 
So the answer can't be, let's just draw smaller perimeters, especially since we're now filling them up with ever more, you know, dynamic, evanescent kinds of workload endpoints, you know, containers coming and going at a certain pace. And serverless instances, right. All of those things springing up and being torn down, you know, on very short life cycles, that's right. It is inconceivable that traditional, you know, perimeter based, micro-segmentation based security frameworks can keep up with the combinatorial explosion and the pace with which, you know, orchestration frameworks are going to be deploying these endpoints. There's, you know, a lot more to do, but this is the transformation story of the 2020s. You know, IT infrastructure looks very different two, five, 10 years from now than it does today. And, you know, we believe HPE has, like I said, a few unfair advantages to lead the world in terms of those transformations. >> Excellent, well, appreciate the look towards the future as well as where we are today. Dave and Dave, thanks so much for joining. >> Thank you, Stu. >> Thanks, dude, pleasure. >> All right, we'll be back with lots more coverage. HPE Discover 2020, the Virtual Experience. I'm Stu Miniman, and thank you for watching theCUBE. (upbeat music)
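To pull the zero trust thread of the interview together: once workloads carry cryptographic identities, authorization becomes a default-deny lookup keyed on identity rather than on IP address. A hedged sketch with an invented policy table (a real system would take the peer identity from a verified certificate, for example its URI SAN, and in an OPA-style setup the table would live outside the code as policy data):

```python
# Which peer identities each workload will answer. Identity, not IP:
# spoofing an address buys an attacker nothing here.
ALLOWED_PEERS = {
    "spiffe://example.org/billing": {"spiffe://example.org/frontend"},
    "spiffe://example.org/frontend": {"spiffe://example.org/gateway"},
}

def authorize(local_id, peer_id):
    """Default deny: a workload speaks only with peers explicitly
    allowed for it, and trusts nothing else."""
    return peer_id in ALLOWED_PEERS.get(local_id, set())
```

The table scales with the number of authorized relationships rather than with network topology, which is the answer to the combinatorial-explosion problem raised above.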