
Is Supercloud an Architecture or a Platform | Supercloud2


 

(electronic music) >> Hi everybody, welcome back to Supercloud 2. I'm Dave Vellante with my co-host John Furrier. We're here at our tricked out Palo Alto studio. We're going live wall to wall all day. We're inserting a number of pre-recorded interviews, folks like Walmart. We just heard from Nir Zuk of Palo Alto Networks, and I'm really pleased to welcome in David Flynn. David Flynn, you may know as one of the people behind Fusion-io, completely changed the way in which people think about storing data, accessing data. David Flynn now the founder and CEO of a company called Hammerspace. David, good to see you, thanks for coming on. >> David: Good to see you too. >> And Dr. Nelu Mihai is the CEO and founder of Cloud of Clouds. He's actually built a Supercloud. We're going to get into that. Nelu, thanks for coming on. >> Thank you, Happy New Year. >> Yeah, Happy New Year. So I'm going to start right off with a little debate that's going on in the community if you guys would bring out this slide. So Bob Muglia early today, he gave a definition of Supercloud. He felt like we had to tighten ours up a little bit. He said a Supercloud is a platform, underscoring platform, that provides programmatically consistent services hosted on heterogeneous cloud providers. Now, Nelu, we have this shared doc, and you've been in there. You responded, you said, well, hold on. Supercloud really needs to be an architecture, or else we're going to have this stove pipe of stove pipes, really. And then you went on with more detail, what's the information model? What's the execution model? How are users going to interact with Supercloud? So I start with you, why architecture? The inference is that a platform, the platform provider's responsible for the architecture? Why does that not work in your view? >> No, the, it's a very interesting question. So whenever I think about platform, what's the connotation, you think about monolithic system? Yeah, I mean, I don't know whether it's true or or not, but there is this connotation of of monolithic. On the other hand, if you look at what's a problem right now with HyperClouds, from the customer perspective, they're very complex. There is a heterogeneous world where actually every single one of this HyperClouds has their own architecture. You need rocket scientists to build a cloud applications. Always there is this contradiction between cost and performance. They fight each other. And I'm quoting here a former friend of mine from Bell Labs who work at AWS who used to say "Cloud is cheap as long as you don't use it too much." (group chuckles) So clearly we need something that kind of plays from the principle point of view the role of an operating system, that seats on top of this heterogeneous HyperCloud, and there's nothing wrong by having these proprietary HyperClouds, think about processors, think about operating system and so on, so forth. But in order to build a system that is simple enough, I think we need to go deeper and understand. >> So the argument, the counterargument to that, David, is you'll never get there. You need a proprietary system to get to market sooner, to solve today's problem. Now I don't know where you stand on this platform versus architecture. I haven't asked you, but. >> I think there are aspects of both for sure. I mean it needs to be an architecture in the sense that it's broad based and open and so forth. 
But you know, platform, you could say as long as people can instantiate it themselves, on their own infrastructure, as long as it's something that can be deployed as, you know, software defined, you don't want the concept of platform being the monolith, you know, combined hardware and software. So it really depends on what you're focused on when you're saying platform, you know, I'd say as long as they software defined thing, to where it can literally run anywhere. I mean, because I really think what we're talking about here is the original concept of cloud computing. The ability to run anything anywhere, without having to care about the physical infrastructure. And what we have today is not that, the cloud today is a big mainframe in the sky, that just happens to be large enough that once you select which region, generally you have enough resources. But, you know, nowadays you don't even necessarily have enough resources in one region. and then you're kind of stuck. So we haven't really gotten to that utility model of computing. And you're also asked to rewrite your application, you know, to abandon the conveniences of high performance file access. You got to rewrite it to use object storage stuff. We have to get away from that. >> Okay, I want to just drill on that, 'cause I think I like that point about, there's not enough availability, but on the developer cloud, the original AWS premise was targeting developers, 'cause at that time, you have to provision a Sun box get a Cisco DSU/CSU, now you get on the cloud. But I think you're giving up the scale question, 'cause I think right now, scale is huge, enterprise grade versus cloud for developers. >> That's Right. >> Because I mean look at, Amazon, Azure, they got compute, they got storage, they got queuing, and some stuff. If you're doing a startup, you throw your app up there, localhost to cloud, no big deal. It's the scale thing that gets me- >> And you can tell by the fact that, in regions that are under high demand, right, like in London or LA, at least with the clients we work with in the median entertainment space, it costs twice as much for the exact same cloud instances that do the exact same amount of work, as somewhere out in rural Canada. So why is it you have such a cost differential, it has to do with that supply and demand, and the fact that the clouds aren't really the ability to run anything anywhere. Even within the same cloud vendor, you're stuck in a specific region. >> And that was never the original promise, right? I mean it was, we turned it into that. But the original promise was get rid of the heavy lifting of IT. >> Not have to run your own, yeah, exactly. >> And then it became, wow, okay I can run anywhere. And then you know, it's like web 2.0. You know people say why Supercloud, you and I talked about this, why do you need a name for Supercloud? It's like web 2.0. >> It's what Cloud was supposed to be. >> It's what cloud was supposed to be, (group laughing and talking) exactly, right. >> Cloud was supposed to be run anything anywhere, or at least that's what we took it as. But you're right, originally it was just, oh don't have to run your own infrastructure, and you can choose somebody else's infrastructure. >> And you did that >> But you're still bound to that. >> Dave: And People said I want more, right? >> But how do we go from here? >> That's, that's actually, that's a very good point, because indeed when the first HyperClouds were designed, were designed really focus on customers. 
I think Supercloud is an opportunity to design in the right way. Also having in mind the computer science rigor. And we should take advantage of that, because in fact actually, if cloud would've been designed properly from the beginning, probably wouldn't have needed Supercloud. >> David: You wouldn't have to have been asked to rewrite your application. >> That's correct. (group laughs) >> To use REST interfaces to your storage. >> Revisist history is always a good one. But look, cloud is great. I mean your point is cloud is a good thing. Don't hold it back. >> It is a very good thing. >> Let it continue. >> Let it go as as it is. >> Yeah, let that thing continue to grow. Don't impose restrictions on the cloud. Just refactor what you need to for scale or enterprise grade or availability. >> And you would agree with that, is that true or is it problem you're solving? >> Well yeah, I mean it, what the cloud is doing is absolutely necessary. What the public cloud vendors are doing is absolutely necessary. But what's been missing is how to provide a consistent interface, especially to persistent data. And have it be available across different regions, and across different clouds. 'cause data is a highly localized thing in current architecture. It only exists as rendered by the storage system that you put it in. Whether that's a legacy thing like a NetApp or an Isilon or even a cloud data service. It's localized to a specific region of the cloud in which you put that. We have to delocalize data, and provide a consistent interface to it across all sites. That's high performance, local access, but to global data. >> And so Walmart earlier today described their, what we call Supercloud, they call it the Walmart cloud native platform. And they use this triplet model. They have AWS and Azure, no, oh sorry, no AWS. They have Azure and GCP and then on-prem, where all the VMs live. When you, you know, probe, it turns out that it's only stateless in the cloud. (John laughs) So, the state stuff- >> Well let's just admit it, there is no such thing as stateless, because even the application binaries and libraries are state. >> Well I'm happy that I'm hearing that. >> Yeah, okay. >> Because actually I have a lot of debate (indistinct). If you think about no software running on a (indistinct) machine is stateless. >> David: Exactly. >> This is something that was- >> David: And that's data that needs to be distributed and provided consistently >> (indistinct) >> Across all the clouds, >> And actually, it's a nonsense, but- >> Dave: So it's an illusion, okay. (group talks over each other) >> (indistinct) you guys talk about stateless. >> Well, see, people make the confusion between state and persistent state, okay. Persistent state it's a different thing. State is a different thing. So, but anyway, I want to go back to your point, because there's a lot of debate here. People are talking about data, some people are talking about logic, some people are talking about networking. In my opinion is this triplet, which is data logic and connectivity, that has equal importance. And actually depending on the application, can have the center of gravity moving towards data, moving towards what I call execution units or workloads. And connectivity is actually the most important part of it. >> David: (indistinct). >> Some people are saying move the logic towards the data, some other people, and you are saying actually, that no, you have to build a distributed data mesh. 
What I'm saying is actually, you have to consider all these three variables, all these vector in order to decide, based on application, what's the most important. Because sometimes- >> John: So the application chooses >> That's correct. >> Well it it's what operating systems were in the past, was principally the thing that runs and manages the jobs, the job scheduler, and the thing that provides your persistent data (indistinct). >> Okay. So we finally got operating system into the equation, thank you. (group laughs) >> Nelu: I actually have a PhD in operating system. >> Cause what we're talking about is an operating system. So forget platform or architecture, it's an operating environment. Let's use it as a general term. >> All right. I think that's about it for me. >> All right, let's take (indistinct). Nelu, I want ask you quick, 'cause I want to give a, 'cause I believe it's an operating system. I think it's going to be a reset, refactored. You wrote to me, "The model of Supercloud has to be open theoretical, has to satisfy the rigors of computer science, and customer requirements." So unique to today, if the OS is going to be refactored, it's not going to be, may or may not be Red Hat or somebody else. This new OS, obviously requirements are for customers too but is what's the computer science that is needed? Where are we, what's the missing? Where's the science in this shift? It's not your standard OS it's not like an- (group talks over each other) >> I would beg to differ. >> (indistinct) truly an operation environment. But the, if you think about, and make analogies, what you need when you design a distributed system, well you need an information model, yeah. You need to figure out how the data is located and distributed. You need a model for the execution units, and you need a way to describe the interactions between all these objects. And it is my opinion that we need to go deeper and formalize these operations in order to make a step forward. And when we design Supercloud, and design something that is better than the current HyperClouds. And actually that is when we design something better, you make a system more efficient and it's going to be better from the cost point of view, from the performance point of view. But we need to add some math into all this customer focus centering and I really admire AWS and their executive team focusing on the customer. But now it's time to go back and see, if we apply some computer science, if you try to formalize to build a theoretical model of cloud, can we build a system that is better than existing ones? >> So David, how do you- >> this is what I'm saying. >> That's a good question >> How do You see the operating system of a, or operating environment of a decentralized cloud? >> Well I think it's layered. I mean we have operating systems that can run systems quite efficiently. Linux has sort of one in the data center, but we're talking about a layer on top of that. And I think we're seeing the emergence of that. For example, on the job scheduling side of things, Kubernetes makes a really good example. You know, you break the workload into the most granular units of compute, the containerized microservice, and then you use a declarative model to state what is needed and give the system the degrees of freedom that it can choose how to instantiate it. Because the thing about these distributed systems, is that the complexity explodes, right? 
Running a piece of hardware, running a single server is not a problem, even with all the many cores and everything like that. It's when you start adding in the networking, and making it so that you have many of them. And then when it's going across whole different data centers, you know, so, at that level the way you solve this is not manually (group laughs) and not procedurally. You have to change the language so it's intent based, it's a declarative model, and what you're stating is what is intended, and you're leaving it to more advanced techniques, like machine learning to decide how to instantiate that service across the cluster, which is what Kubernetes does, or how to instantiate the data across the diverse storage infrastructure. And that's what we do. >> So that's a very good point because actually what has been neglected with HyperClouds is really optimization and automation. But in order to be able to do both of these things, you need, I'm going back and I'm stubborn, you need to have a mathematical model, a theoretical model because what does automation mean? It means that we have to put machines to do the work instead of us, and machines work with what? Formula, with algorithms, they don't work with services. So I think Supercloud is an opportunity to underscore the importance of optimization and automation- >> Totally agree. >> In HyperCloud, and actually by doing that, we can also have an interesting connotation. We are also contributing to save our planet, because if you think right now. we're consuming a lot of energy on this HyperClouds and also all this AI applications, and I think we can do better and build the same kind of application using less energy. >> So yeah, great point, love that call out, the- you know, Dave and I always joke about the old, 'cause we're old, we talk about, you know, (Nelu Laughs) old history, OS/2 versus DOS, okay, OS's, OS/2 is silly better, first threaded OS, DOS never went away. So how does legacy play into this conversation? Because I buy the theoretical, I love the conversation. Okay, I think it's an OS, totally see it that way myself. What's the blocker? Is there a legacy that drags it back? Is the anchor dragging from legacy? Is there a DOS OS/2 moment? Is there an opportunity to flip the script? This is- >> I think that's a perfect example of why we need to support the existing interfaces, Operating Systems, real operating systems like Linux, understands how to present data, it's called a file system, block devices, things that that plumb in there. And by, you know, going to a REST interface and S3 and telling people they have to rewrite their applications, you can't even consume your application binaries that way, the OS doesn't know how to pull that sort of thing. So we, to get to cloud, to get to the ability to host massive numbers of tenants within a centralized infrastructure, you know, we abandoned these lower level interfaces to the OS and we have to go back to that. It's the reason why DOS ultimately won, is it had the momentum of the install base. We're seeing the same thing here. Whatever it is, it has to be a real file system and not a come down file system >> Nelu, what's your reaction, 'cause you're in the theoretical bandwagon. Let's get your reaction. >> No, I think it's a good, I'll give, you made a good analogy between OS/2 and DOS, but I'll go even farther saying, if you think about the evolution operating system didn't stop the evolution of underlying microprocessors, hardware, and so on and so forth. 
On the contrary, it was a catalyst for that. So because everybody could develop their own hardware, without worrying that the applications on top of operating system are going to modify. The same thing is going to happen with Supercloud. You're going to have the AWSs, you're going to have the Azure and the the GCP continue to evolve in their own way proprietary. But if we create on top of it the right interface >> The open, this is why open is important. >> That's correct, because actually you're going to see sometime ago, everybody was saying, remember venture capitals were saying, "AWS killed the world, nobody's going to come." Now you see what Oracle is doing, and then you're going to see other players. >> It's funny, Amazon's trying to be more like Microsoft. Microsoft's trying to be more like Amazon and Google- Oracle's just trying to say they have cloud. >> That's, that's correct, (group laughs) so, my point is, you're going to see a multiplication of this HyperClouds and cloud technology. So, the system has to be open in order to accommodate what it is and what is going to come. Okay, so it's open. >> So the the legacy- so legacy is an opportunity, not a blocker in your mind. And you see- >> That's correct, I think we should allow them to continue to to to be their own actually. But maybe you're going to find a way to connect with it. >> Amazon's the processor, and they're on the 80 80 80 right? >> That's correct. >> You're saying you love people trying to get put to work. >> That's a good analogy. >> But, performance levels you say good luck, right? >> Well yeah, we have to be able to take traditional applications, high performance applications, those that consume file system and persistent data. Those things have to be able to run anywhere. You need to be able to put, put them onto, you know, more elastic infrastructure. So, we have to actually get cloud to where it lives up to its billing. >> And that's what you're solving for, with Hammerspace, >> That's what we're solving for, making it possible- >> Give me the bumper sticker. >> Solving for how do you have massive quantities of unstructured file data? At the end of the day, all data ultimately is unstructured data. Have that persistent data available, across any data center, within any cloud, within any region on-prem, at the edge. And have not just the same APIs, but have the exact same data sets, and not sucked over a straw remote, but at extreme high performance, local access. So how do you have local access to globally shared distributed data? And that's what we're doing. We are orchestrating data globally across all different forms of storage infrastructure, so you have a consistent access at the highest performance levels, at the lowest level innate built into the OS, how to consume it as (indistinct) >> So are you going into the- all the clouds and natively building in there, or are you off cloud? >> So This is software that can run on cloud instances and provide high performance file within the cloud. It can take file data that's on-prem. Again, it's software, it can run in virtual or on physical servers. And it abstracts the data from the existing storage infrastructure, and makes the data visible and consumable and orchestratable across any of it. >> And what's the elevator pitch for Cloud of Cloud, give that too. >> Well, Cloud of Clouds creates a theoretical model of cloud, and it describes every single object in the cloud. 
Where is data, execution units, and connectivity, with one single class of very simple object. And I can, I can give you (indistinct) >> And the problem that solves is what? >> The problem that solves is, it creates this mathematical model that is necessary in order to do other interesting things, such as optimization, using sata engines, using automation, applying ML for instance. Or deep learning to automate all this clouds, if you think about in the industrial field, we know how to manage and automate huge plants. Why wouldn't it do the same thing in cloud? It's the same thing you- >> That's what you mean by theoretical model. >> Nelu: That's correct. >> Lay out the architecture, almost the bones of skeleton or something, or, and then- >> That's correct, and then on top of it you can actually build a platform, You can create your services, >> when you say math, you mean you put numbers to it, you kind of index it. >> You quantify this thing and you apply mathematical- It's really about, I can disclose this thing. It's really about describing the cloud as a knowledge graph for every single object in the graph for node, an edge is a vector. And then once you have this model, then you can apply the field theory, and linear algebra to do operation with these vectors. And it's, this creates a very interesting opportunity to let the math do this thing for us. >> Okay, so what happens with hyperscale, or it's like AWS in your model. >> So in, in my model actually, >> Are they happy with this, or they >> I'm very happy with that. >> Will they be happy with you? >> We create an interface to every single HyperCloud. We actually, we don't need to interface with the thousands of APIs, but you know, if we have the 80 20 rule, and we map these APIs into this graph, and then every single operation that is done in this graph is done from the beginning, in an optimized manner and also automation ready. >> That's going to be great. David, I want us to go back to you before we close real quick. You've had a lot of experience, multiple ventures on the front end. You talked to a lot of customers who've been innovating. Where are the classic (indistinct)? Cause you, you used to sell and invent product around the old school enterprises with storage, you know that that trajectory storage is still critical to store the data. Where's the classic enterprise grade mindset right now? Those customers that were buying, that are buying storage, they're in the cloud, they're lifting and shifting. They not yet put the throttle on DevOps. When they look at this Supercloud thing, Are they like a deer in the headlights, or are they like getting it? What's the, what's the classic enterprise look like? >> You're seeing people at different stages of adoption. Some folks are trying to get to the cloud, some folks are trying to repatriate from the cloud, because they've realized it's better to own than to rent when you use a lot of it. And so people are at very different stages of the journey. But the one thing that's constant is that there's always change. And the change here has to do with being able to change the location where you're doing your computing. 
So being able to support traditional workloads in the cloud, being able to run things at the edge, and being able to rationalize where the data ought to exist, and with a declarative model, intent-based, business objective-based, be able to swipe a mouse and have the data get redistributed and positioned across different vendors, across different clouds, that, we're seeing that as really top of mind right now, because everybody's at some point on this journey, trying to go somewhere, and it involves taking their data with them. (John laughs) >> Guys, great conversation. Thanks so much for coming on, for John, Dave. Stay tuned, we got a great analyst power panel coming right up. More from Palo Alto, Supercloud 2. Be right back. (bouncy music)

Published Date : Jan 18 2023


Breaking Analysis: CIOs in a holding pattern but ready to strike at monetization


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is "Breaking Analysis" with Dave Vellante. >> Recent conversations with IT decision makers show a stark contrast between exiting 2023 versus the mindset when we were leaving 2022. CIOs are generally funding new initiatives by pushing off or cutting lower priority items, while security efforts are still being funded. Those that enable business initiatives that generate revenue or taking priority over cleaning up legacy technical debt. The bottom line is, for the moment, at least, the mindset is not cut everything, rather, it's put a pause on cleaning up legacy hairballs and fund monetization. Hello, and welcome to this week's Wikibon Cube Insights powered by ETR. In this breaking analysis, we tap recent discussions from two primary sources, year-end ETR roundtables with IT decision makers, and CUBE conversations with data, cloud, and IT architecture practitioners. The sources of data for this breaking analysis come from the following areas. Eric Bradley's recent ETR year end panel featured a financial services DevOps and SRE manager, a CSO in a large hospitality firm, a director of IT for a big tech company, the head of IT infrastructure for a financial firm, and a CTO for global travel enterprise, and for our upcoming Supercloud2 conference on January 17th, which you can register free by the way, at supercloud.world, we've had CUBE conversations with data and cloud practitioners, specifically, heads of data in retail and financial services, a cloud architect and a biotech firm, the director of cloud and data at a large media firm, and the director of engineering at a financial services company. Now we've curated commentary from these sources and now we share them with you today as anecdotal evidence supporting what we've been reporting on in the marketplace for these last couple of quarters. On this program, we've likened the economy to the slingshot effect when you're driving, when you're cruising along at full speed on the highway, and suddenly you see red brake lights up ahead, so, you tap your own brakes and then you speed up again, and traffic is moving along at full speed, so, you think nothing of it, and then, all of a sudden, the same thing happens. You slow down to a crawl and you start wondering, "What the heck is happening?" And you become a lot more cautious about the rate of acceleration when you start moving again. Well, that's the trend in IT spend right now. Back in June, we reported that despite the macro headwinds, CIOs were still expecting 6% to 7% spending growth for 2022. Now that was down from 8%, which we reported at the beginning of 2022. That was before Ukraine, and Fed tightening, but given those two factors, you know that that seemed pretty robust, but throughout the fall, we began reporting consistently declining expectations where CIOs are now saying Q4 will come in at around 3% growth relative to last year, and they're expecting, or should we say hoping that it pops back up in 2023 to 4% to 5%. The recent ETR panelists, when they heard this, are saying based on their businesses and discussions with their peers, they could see low single digit growth for 2023, so, 1%, 2%, 3%, so, this sort of slingshotting, or sometimes we call it a seesaw economy, has caught everyone off guard. Amazon is a good example of this, and there are others, but Amazon entered the pandemic with around 800,000 employees. It doubled that workforce during the pandemic. 
Now, right before Thanksgiving in 2022, Amazon announced that it was laying off 10,000 employees, and, Jassy, the CEO of Amazon, just last week announced that number is now going to grow to 18,000. Now look, this is a rounding error at Amazon from a headcount standpoint and their headcount remains far above 2019 levels. Its stock price, however, does not and it's back down to 2019 levels. The point is that visibility is very poor right now and it's reflected in that uncertainty. We've seen a lot of layoffs, obviously, the stock market's choppy, et cetera. Now importantly, not everything is on hold, and this downturn is different from previous tech pullbacks in that the speed at which new initiatives can be rolled out is much greater thanks to the cloud, and if you can show a fast return, you're going to get funding. Organizations are pausing on the cleanup of technical debt, unless it's driving fast business value. They're holding off on modernization projects. Those business enablement initiatives are still getting funded. CIOs are finding the money by consolidating redundant vendors, and they're stealing from other pockets of budget, so, it's not surprising that cybersecurity remains the number one technology priority in 2023. We've been reporting that for quite some time now. It's specifically cloud, cloud native security container and API security. That's where all the action is, because there's still holes to plug from that forced march to digital that occurred during COVID. Cloud migration, kind of showing here on number two on this chart, still a high priority, while optimizing cloud spend is definitely a strategy that organizations are taking to cut costs. It's behind consolidating redundant vendors by a long shot. There's very little evidence that cloud repatriation, i.e., moving workloads back on prem is a major cost cutting trend. The data just doesn't show it. What is a trend is getting more real time with analytics, so, companies can do faster and more accurate customer targeting, and they're really prioritizing that, obviously, in this down economy. Real time, we sometimes lose it, what's real time? Real time, we sometimes define as before you lose the customer. Now in the hiring front, customers tell us they're still having a hard time finding qualified site reliability engineers, SREs, Kubernetes expertise, and deep analytics pros. These job markets remain very tight. Let's stay with security for just a moment. We said many times that, prior to COVID, zero trust was this undefined buzzword, and the joke, of course, is, if you ask three people, "What is zero trust?" You're going to get three different answers, but the truth is that virtually every security company that was resisting taking a position on zero trust in an attempt to avoid... They didn't want to get caught up in the buzzword vortex, but they're now really being forced to go there by CISOs, so, there are some good quotes here on cyber that we want to share that came out of the recent conversations that we cited up front. The first one, "Zero trust is the highest ROI, because it enables business transformation." In other words, if I can have good security, I can move fast, it's not a blocker anymore. Second quote here, "ZTA," zero trust architecture, "Is more than securing the perimeter. It encompasses strong authentication and multiple identity layers. It requires taking a software approach to security instead of a hardware focus." 
The next one, "I'd love to have a security data lake that I could apply to asset management, vulnerability management, incident management, incident response, and all aspects for my security team. I see huge promise in that space," and the last one, I see NLP, natural language processing, as the foundation for email security, so, instead of searching for IP addresses, you can now read emails at light speed and identify phishing threats, so, look at, this is a small snapshot of the mindset around security, but I'll add, when you talk to the likes of CrowdStrike, and Zscaler, and Okta, and Palo Alto Networks, and many other security firms, they're listening to these narratives around zero trust. I'm confident they're working hard on skating to this puck, if you will. A good example is this idea of a security data lake and using analytics to improve security. We're hearing a lot about that. We're hearing architectures, there's acquisitions in that regard, and so, that's becoming real, and there are many other examples, because data is at the heart of digital business. This is the next area that we want to talk about. It's obvious that data, as a topic, gets a lot of mind share amongst practitioners, but getting data right is still really hard. It's a challenge for most organizations to get ROI and expected return out of data. Most companies still put data at the periphery of their businesses. It's not at the core. Data lives within silos or different business units, different clouds, it's on-prem, and increasingly it's at the edge, and it seems like the problem is getting worse before it gets better, so, here are some instructive comments from our recent conversations. The first one, "We're publishing events onto Kafka, having those events be processed by Dataproc." Dataproc is a Google managed service to run Hadoop, and Spark, and Flank, and Presto, and a bunch of other open source tools. We're putting them into the appropriate storage models within Google, and then normalize the data into BigQuery, and only then can you take advantage of tools like ThoughtSpot, so, here's a company like ThoughtSpot, and they're all about simplifying data, democratizing data, but to get there, you have to go through some pretty complex processes, so, this is a good example. All right, another comment. "In order to use Google's AI tools, we have to put the data into BigQuery. They haven't integrated in the way AWS and Snowflake have with SageMaker. Moving the data is too expensive, time consuming, and risky," so, I'll just say this, sharing data is a killer super cloud use case, and firms like Snowflake are on top of it, but it's still not pretty across clouds, and Google's posture seems to be, "We're going to let our database product competitiveness drive the strategy first, and the ecosystem is going to take a backseat." Now, in a way, I get it, owning the database is critical, and Google doesn't want to capitulate on that front. Look, BigQuery is really good and competitive, but you can't help but roll your eyes when a CEO stands up, and look, I'm not calling out Thomas Kurian, every CEO does this, and talks about how important their customers are, and they'll do whatever is right by the customer, so, look, I'm telling you, I'm rolling my eyes on that. Now let me also comment, AWS has figured this out. They're killing it in database. 
If you take Redshift for example, it's still growing, as is Aurora, really fast growing services and other data stores, but AWS realizes it can make more money in the long-term partnering with the Snowflakes and Databricks of the world, and other ecosystem vendors versus sub optimizing their relationships with partners and customers in order to sell more of their own homegrown tools. I get it. It's hard not to feature your own product. IBM chose OS/2 over Windows, and tried for years to popularize it. It failed. Lotus, go back way back to Lotus 1, 2, and 3, they refused to run on Windows when it first came out. They were running on DEC VAX. Many of you young people in the United States have never even heard of DEC VAX. IBM wanted to run every everything only in its cloud, the same with Oracle, originally. VMware, as you might recall, tried to build its own cloud, but, eventually, when the market speaks and reveals what seems to be obvious to analysts, years before, the vendors come around, they face reality, and they stop wasting money, fighting a losing battle. "The trend is your friend," as the saying goes. All right, last pull quote on data, "The hardest part is transformations, moving traditional Informatica, Teradata, or Oracle infrastructure to something more modern and real time, and that's why people still run apps in COBOL. In IT, we rarely get rid of stuff, rather we add on another coat of paint until the wood rots out or the roof is going to cave in. All right, the last key finding we want to highlight is going to bring us back to the cloud repatriation myth. Followers of this program know it's a real sore spot with us. We've heard the stories about repatriation, we've read the thoughtful articles from VCs on the subject, we've been whispered to by vendors that you should investigate this trend. It's really happening, but the data simply doesn't support it. Here's the question that was posed to these practitioners. If you had unlimited budget and the economy miraculously flipped, what initiatives would you tackle first? Where would you really lean into? The first answer, "I'd rip out legacy on-prem infrastructure and move to the cloud even faster," so, the thing here is, look, maybe renting infrastructure is more expensive than owning, maybe, but if I can optimize my rental with better utilization, turn off compute, use things like serverless, get on a steeper and higher performance over time, and lower cost Silicon curve with things like Graviton, tap best of breed tools in AI, and other areas that make my business more competitive. Move faster, fail faster, experiment more quickly, and cheaply, what's that worth? Even the most hard-o CFOs understand the business benefits far outweigh the possible added cost per gigabyte, and, again, I stress "possible." Okay, other interesting comments from practitioners. "I'd hire 50 more data engineers and accelerate our real-time data capabilities to better target customers." Real-time is becoming a thing. AI is being injected into data and apps to make faster decisions, perhaps, with less or even no human involvement. That's on the rise. Next quote, "I'd like to focus on resolving the concerns around cloud data compliance," so, again, despite the risks of data being spread out in different clouds, organizations realize cloud is a given, and they want to find ways to make it work better, not move away from it. 
The same thing in the next one, "I would automate the data analytics pipeline and focus on a safer way to share data across the states without moving it," and, finally, "The way I'm addressing complexity is to standardize on a single cloud." MonoCloud is actually a thing. We're hearing this more and more. Yes, my company has multiple clouds, but in my group, we've standardized on a single cloud to simplify things, and this is a somewhat dangerous trend, because it's creating even more silos and it's an opportunity that needs to be addressed, and that's why we've been talking so much about supercloud is a cross-cloud, unifying, architectural framework, or, perhaps, it's a platform. In fact, that's a question that we will be exploring later this month at Supercloud2 live from our Palo Alto Studios. Is supercloud an architecture or is it a platform? And in this program, we're featuring technologists, analysts, practitioners to explore the intersection between data and cloud and the future of cloud computing, so, you don't want to miss this opportunity. Go to supercloud.world. You can register for free and participate in the event directly. All right, thanks for listening. That's a wrap. I'd like to thank Alex Myerson, who's on production and manages our podcast, Ken Schiffman as well, Kristen Martin and Cheryl Knight, they helped get the word out on social media, and in our newsletters, and Rob Hof is our editor-in-chief over at siliconangle.com. He does some great editing. Thank you, all. Remember, all these episodes are available as podcasts wherever you listen. All you've got to do is search "breaking analysis podcasts." I publish each week on wikibon.com and siliconangle.com where you can email me directly at david.vellante@siliconangle.com or DM me, @Dante, or comment on our LinkedIn posts. By all means, check out etr.ai. They get the best survey data in the enterprise tech business. We'll be doing our annual predictions post in a few weeks, once the data comes out from the January survey. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, everybody, and we'll see you next time on "Breaking Analysis." (upbeat music)

Published Date : Jan 7 2023


HORSEMAN and HANLEY Fixed


 

(upbeat music) >> Hello everyone, welcome to this special Cube conversation. I'm John Furrier, host of theCube. We're here in Palo Alto. We've got some remote guests. Going to break down the Fortinet vulnerability, which was confirmed last week as a critical vulnerability that exposed a zero-day flaw in some of their key products, obviously FortiOS and FortiProxy, for remote attacks. So we're going to break this down. It's a real-time vulnerability that was just discovered in the industry. Horizon3.ai is one of the companies that was key in identifying this. And they have a product that helps companies detect and remediate, and a bunch of other cool things you've heard on theCube here. We've got James Horseman, an exploit developer. Love the title. Got to say, I'm not going to lie, I like that one. And Zach Hanley, who's the chief attack engineer at Horizon3.ai. Gentlemen, first, thank you for joining the Cube conversation. >> Thank you. It's good to be here. >> Yeah, thank you so much for having us. >> So before we get into the whole Fortinet vulnerability that was exposed and how you guys are playing into this, I just got to say I love the titles. Exploit Developer, Chief Attack Engineer, you don't see those every day. Explain the titles. Zach, let's start with you. Chief Attack Engineer, what do you do? >> Yeah, sure. So the gist of it is that there is a lot to do in the cybersecurity world. And we made up a new engineering title called Attack Engineer because there are so many different things an attacker will actually do over the course of an attack. So we just named them an engineer. And I lead that team that helps develop the offensive capabilities for our product. >> Got it. James, you're the Exploit Developer, exploiting. What are you exploiting? What's going on there? >> So what I'll do day to day is we'll take N-days, which are vulnerabilities that have been disclosed to a vendor, but not necessarily publicly patched yet, or for which a proof of concept exists. And I'll try to reverse engineer and find them, so we can integrate them into our product and our customers can use them to make sure that they're actually secure. And then if there are no interesting N-days to go after, we'll sometimes search for zero-days, which are vulnerabilities in products that the vendor doesn't yet know about. >> Yeah, and those are the most critical. Those things can be really exploited and cause a lot of damage. Well James, thanks for coming on. We're here to talk about the vulnerability that happened with Fortinet and their products, a zero-day vulnerability. But first, for the folks, for context, Horizon3.ai is a new startup, rapidly growing. They've been on theCube. The CEO, Snehal, and team have described their product as autonomous pen testing. But as part of that, they also have a different approach to the testing environment. So they're constantly putting companies under pressure. Let's get into it. Let's get into this hack. So you guys are kind of like, I call it, the early warning detection system. You're seeing things early because your product's constantly testing infrastructure. Okay? Over time, all the time, always on. How did this come about? How did you guys see this? What happened? Take us through. >> Yeah, sure. I'll start off.
So on Friday, we saw on Twitter, which is actually a really good source of threat intelligence these days, we saw a person release details that Fortinet had sent an advance warning email that a critical vulnerability had been discovered and that an emergency patch was released. And from the details that we saw, we saw that it was an authentication bypass and we saw that it affected FortiOS, FortiProxy and the FortiSwitch Manager. And we knew right off the bat those are some of their most heavily used products. And for us to understand how this vulnerability worked, and for us to actually help our clients and other people around the world understand it, we needed to get after it. So after that, James and I got on it, and then James can tell you what we did after we first heard. >> Yeah. Take us through play by play. >> Sure. So we saw it was a 9.8 CVSS, which means it's easy to exploit and low complexity, and also kind of gives you the keys to the kingdom. So we like to see those because they're easy to find, easy to go after. They're big wins. So as soon as we saw this come out we downloaded some firmware for FortiOS. And the first few hours were really about unpacking the firmware, seeing if we could even get it to run. We got it running as a VMware VMDK file. And then we started to unpack the firmware to see what we could find inside. And that was probably at least half of the time. There seemed to be maybe a little bit of obfuscation in the firmware. We were able to analyze the VMDK files and get them mounted, and we saw that their operating system was compressed. And when we went to decompress it we were getting some strange decompression errors, corruption errors. And we were kind of scratching our heads a little bit, like, you know, "What's going on here?" "These look like they're legitimately compressed files." And after a while we noticed they had what seemed to be a different decompression tool than what we had on our systems, also in that VMDK. And so we were able to get that running and decompress the firmware. And from there we were off to the races to dive deeper into the differences between the vulnerable firmware and the patched firmware. >> So the compressed files were hidden. They basically hid the compressed files. >> Yeah, we're not so sure if they were intentionally obfuscated or maybe it was just a really old version of that compression algorithm. It was the XZ compression tool. >> Got it. So what happens next? So take us through. So you discovered, you guys tested. What do you guys do next? How did this thing... I mean, I saw the news, it hit heavily. You know, they updated, everyone updated their catalog for patching. So this kind of hangs out there. There's a time lag out there. What's the state of the security at that time? Say Friday, it breaks over the weekend, potentially a lot of attacks might have happened. >> Yeah, so they chose to release this emergency pre-warning on Friday, which is a terrible day, because most people are probably already swamped with work or checking out for the weekend. And by Sunday, James and I had actually figured out the vulnerability. Well, to make the timeline a little shorter, generally what we do between when we discover or hear news of the CVE and when we actually have a proof of concept is a lot of what we call patch diffing. That's when we take the patched version and the unpatched version and we run them through a tool that kind of shows us the differences. And those differences are really key insight into, "Hey, what was actually going on?"
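To make that patch-diffing step concrete, here is a minimal sketch of a first pass, assuming both firmware builds have already been decompressed and extracted to local directories (the directory names are hypothetical, and this is not Horizon3.ai's actual tooling): hash every file in the vulnerable and patched trees and report which files changed, which gives you the shortlist of binaries to load into a disassembler.

```python
import hashlib
from pathlib import Path

def hash_tree(root: Path) -> dict:
    """Map each file path (relative to root) to a SHA-256 digest."""
    digests = {}
    for path in root.rglob("*"):
        if path.is_file():
            rel = str(path.relative_to(root))
            digests[rel] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests

# Hypothetical locations of the two extracted firmware trees.
vulnerable = hash_tree(Path("fortios-vulnerable-extracted"))
patched = hash_tree(Path("fortios-patched-extracted"))

# Files present in both builds whose contents differ are the first
# candidates for closer inspection in a disassembler.
changed = sorted(name for name in vulnerable.keys() & patched.keys()
                 if vulnerable[name] != patched[name])
added = sorted(patched.keys() - vulnerable.keys())

print(f"{len(changed)} modified files, {len(added)} new files")
for name in changed[:20]:
    print("modified:", name)
```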
"How did this vulnerability happen?" So between Friday and Sunday, we were kind of scratching our heads and had some inspiration Sunday night and we actually figured it out. So Sunday night, we released news on Twitter that we had replicated the exploit. And the next day, Monday morning, finally, Fortinet actually released their PSIRT notice, where they actually announced to the world publicly that there was a vulnerability and here are the mitigation steps that you can take to mitigate the vulnerability if you cannot patch. And they also release some indicators of compromise but their indicators of compromise were very limited. And what we saw was a lot of people on social media, hey asking like, "These indicators of compromise aren't sufficient." "We can't tell if we've been compromised." "Can you please give us more information?" So because we already had the exploit, what we did was we exploited our test Fortinet devices in our lab and we collected our own indicators of compromise and we wrote those up and then released them on Tuesday, so that people would have a better indication to judge their environments if they've been already exploited in the wild by this issue. Which they also announced in their PSIRT that it was a zero-day being exploited in the wild It wasn't a security researcher that originally found the issue. >> So unpack the difference for the folks that don't know the difference between a zero-day versus a research note. >> Yeah, so a zero-day is essentially a vulnerability that is exploited and taken advantage of before it's made public. An N-day, where a security researcher may find something and report it, that and then once they announce the CVE, that's considered an N-day. So once it's known, it's an N-day and once if it's exploited before that, it's a zero-day. >> Yeah. And the difference is zero-day people can get in there and get into it. You guys saw it Friday on Twitter you move into action Fortinet goes public on Monday. The lag between those days is critical time. What was going on? Why are you guys doing this? Is this part of the autonomous pen testing product? Is this part of what you guys do? Why Horizon3.ai? Is this part of your business model? Or was this was one of those things where you guys just jumped on it? Take us through Friday to Monday. >> James, you want to take this one? >> Sure. So we want to hop on it because we want to be able to be the first to have a tool that we can use to exploit our customer system in a safe manner to prove that they're vulnerable, so then they can go and fix it. So the earlier that we have these tools to exploit the quicker our customers can patch and verify that they are no longer vulnerable. So that's the drive for us to go after these breaking exploits. So like I said, Friday we were able to get the firmware, get it decompressed. We actually got a test system up and running, familiarized ourself with the system a little bit. And we just started going through the patch. And one of the first things we noticed was in their API server, they had a a dip where they started including some extra HTTP headers when they proxied a connection to one of their backend servers. And there were, I believe, three headers. There was a HTTP forwarded header, a Vdom header, and a Cert header. And so we took those strings and we put them into our de-compiled version of the firmware to kind of start to pinpoint an area for us to look because this firmware is gigantic. There's tons of files to look at. 
And so having that patch is really critical to being able to quickly reverse engineer what they did to find the original exploit. So after we put those strings into our firmware, we found some interesting parts centered around authorization and authentication for these devices. And what we found was when you set a specific forwarded header, the system, for lack of a better term, thought that you were on the inside. So a lot of these systems will have, kind of, two methods of entry. One is through the front door, where if you come in you have to provide some credentials. They don't really trust you. You have to provide a cookie or some kind of session ID in order to be allowed to make requests. And the other side is kind of through the back door, where it looks like you are part of the system itself. So if you want to ask for a particular resource, if you look like you're part of the system, they're not going to scrutinize you too much. They'll just let you do whatever you want to do. So really the nature of this exploit was that we were able to manipulate some of those HTTP headers to trick the system into thinking that we were coming in through the back door when we were really coming in through the front. >> So take me through that impact. That means remote execution. I can come in remotely and anonymously and act like I'm on the inside of the system. >> Yeah. >> And that's the keys to the kingdom, as you said earlier, right? >> Yeah. So the crux of the vulnerability is it allows you to make any kind of request you want to this system as if you were an administrator. So it lets you control the interfaces, set them up or down, lets you create packet captures, lets you add and remove users. And what we tried to do, which surprisingly the exploit didn't let us do, was to create a new admin user. So there was some kind of extra code in there to stop somebody that did get that extra access from creating an admin user. And so that kind of bummed us out. And so after we discovered the exploit we were kind of poking around to see what we could do with it, couldn't create an admin user. We were like, "Oh no, what are we going to do?" And eventually we came up with the idea to modify the existing administrator user. And that the exploit did allow us to do. So our initial POC took some SSH keys, added them to an existing administrative user, and then we were able to SSH into the system. >> Awesome. Great description. All right, so Zach, let's get to you for a second. So how does this happen? What does this... How did we get here? What was the motivation? If you're the chief attacker and you want to make this exploit happen, take me through what the other guy or gal is thinking and what they did. >> Sure. So you mean from the attacker's perspective, why are they doing this? >> Yeah. How'd this exploit happen? >> Yeah. >> And what was it motivated by? Was it a mistake? Was it intentional? >> Yeah, ultimately, I don't think any vendor purposefully creates vulnerabilities, but as you create a system and it builds and builds, it gets more complex and naturally logic bugs happen. And this was a logic bug. So there's no blaming Fortinet for having this vulnerability, or saying it's, like, a back door. It just happens. You saw throughout this last year, F5 had a very similar vulnerability, VMware had a very similar vulnerability, all introducing authentication bypasses.
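For readers who want to picture the class of logic bug being described here, the sketch below is a deliberately simplified illustration of "trusting a client-controllable header to decide front door versus back door." It is not Fortinet's code, and the header and address values are made up; the point is that when the apparent origin of a request comes from a header the client can set, an outside request can masquerade as an internal, already-trusted one.

```python
# Simplified illustration of a header-based authentication bypass,
# not Fortinet's implementation.

TRUSTED_INTERNAL = "127.0.0.1"

def is_trusted_vulnerable(headers: dict, remote_addr: str) -> bool:
    # BUG: the claimed origin comes from a client-supplied header, so any
    # outside caller can pretend the request originated inside the device.
    claimed_origin = headers.get("Forwarded", remote_addr)
    return TRUSTED_INTERNAL in claimed_origin

def is_trusted_fixed(headers: dict, remote_addr: str) -> bool:
    # Safer: base the decision on the actual socket peer address (or a
    # cryptographically verified identity), never on a spoofable header.
    return remote_addr == TRUSTED_INTERNAL

# An external attacker at 203.0.113.5 sends a crafted header:
attacker_headers = {"Forwarded": "for=127.0.0.1"}
print(is_trusted_vulnerable(attacker_headers, "203.0.113.5"))  # True  -> bypassed
print(is_trusted_fixed(attacker_headers, "203.0.113.5"))       # False -> denied
```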
So from the attacker's mindset, why they're actually going after this: a lot of these devices that Fortinet has are on the edge of corporate networks. And whether it's ransomware or anything else, if you're an APT, you want to get into organizations. You want to get from the outside to the inside. So these edge devices are super important, and they're going to get a lot of eyes from attackers trying to figure out different ways to get into the system. And as you saw, this was exploited in the wild, and that's how Fortinet became aware of it. So obviously there are some attackers out there doing this right now. >> Well, this highlights your guys' business model. I love what you guys do. I think it's a unique and needed approach. You take on the role of, I guess, white hat hacker as a service. I don't know what to call it. You guys are constantly penetrating, testing, creating value for the customers to avoid, in this case, a product that's popular that just had this situation and needed it to be resolved. And the hard part is how do you do it, right? So again, all these things are going on. This is the future of security, where you need to have these, I won't say simulations, but constant kind of testing at scale. >> Yeah. >> I mean, you've got the edge, it takes one little entry point to get into the network. It could be anywhere. >> Yeah, definitely, security has to be continuous these days. Because if you're only doing a pen test once a year or twice a year, you have six months to a year of risk just building and building. And there are countless vulnerabilities and countless misconfigurations that can be introduced into your network as time goes on. >> Well, autonomous pen testing- >> Just because you're- >> ...is great. That's awesome stuff. I think it just frees up the talent in the organization to do other things and, again, get on the real important stuff. >> Just because your network was secure yesterday doesn't mean it's going to be secure today. So in addition to your defense in depth and making sure that you have all the right configurations, you want to be continuously testing the security of your network to make sure that no new vulnerabilities have been introduced. >> And with the cloud native, modern application environment we have now, hardware's got to keep up. More logic means more potential vulnerabilities can emerge. You just never know when that one N-day vulnerability is going to be there. So constantly looking out for it is a really big deal. >> Definitely. Yeah, the switch to cloud and moving into hybrid cloud has introduced a lot more complexity into environments. And it's definitely another hole attackers are going after. >> All right. Well, I've got you guys here. I really appreciate the commentary on this vulnerability and this exploit opportunity, where Fortinet had to move fast and you guys helped them and the customers. In general, as you look at the security business now and the practitioners out there, there are a lot of pain points. What are the most acute pain points that the security ops guys (laughing) are dealing with right now? Is it just the constant barrage of attacks? What's the real pain right now? >> I think it really depends on the organization. If you're looking at it at the level of what's in the news, you're constantly seeing all these security products being offered. The reality is that the majority of companies in the US actually don't have a security staff. They maybe have an IT guy, just one, and he's not a security guy.
So he's juggling getting his company the resources it needs, but he's also overwhelmed with all the security things that are happening in the world. So I think time and resources are really the pain points right now. >> Awesome. James, any comment? >> Yeah, just to add to what Zach said, these IT guys are put under pressure. These Fortinet devices could be used in a company that just recently transitioned to a lot of work from home because of COVID and whatnot. They put these devices online, and now they're under pressure to keep them up to date, keep them configured, and keep them patched. But anytime you make a change to a system, there's a risk that it goes down. And if the employees can't VPN or log in from home anymore, then they can't work and the company can't make money. So it's really a balancing act for that IT guy to make sure his environment is up to date while also making sure it's not taken down for any reason. It's a challenging position to be in, and prioritizing what you need to fix, and when, is definitely a difficult problem. >> Well, this is a great example. This Fortinet news highlights the Horizon3.ai advantage and what you guys do. I think this is going to be table stakes for security in the industry, as people have to build their own, I call it the militia. You've got to have your own testing. (laughing) You've got to have your own way to help protect yourself. And one of them is to know what's going on all the time, every day, today and tomorrow. So congratulations, and thanks for sharing the exploit here on this zero-day flaw that was exposed. Thanks for coming on. >> Yeah, thanks for having us. >> Thank you. >> Okay. This is theCube here in Palo Alto, California. I'm John Furrier. You're watching a security update, security news, breaking down the exploit, the zero-day flaw that was exploited in at least one documented attack. Fortinet devices now identified and patched. This is theCube. Thanks for watching. (upbeat music)
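Earlier in the conversation, Zach and James described publishing indicators of compromise so defenders could check whether they had already been exploited. The sketch below shows the general shape of that kind of log sweep: export the device logs and search them for the published patterns. The log file name and the patterns are placeholders, not the actual indicators Fortinet or Horizon3.ai released.

```python
import re
from pathlib import Path

# Placeholder indicator-of-compromise patterns. Substitute the vendor's
# published indicators; these strings are illustrative only.
IOC_PATTERNS = [
    re.compile(r'user="PLACEHOLDER_UNEXPECTED_ACCOUNT"'),
    re.compile(r"PLACEHOLDER_SUSPICIOUS_USER_AGENT"),
]

def scan_log(path: Path):
    """Yield (line_number, line) pairs that match any IOC pattern."""
    with path.open(errors="replace") as log:
        for lineno, line in enumerate(log, start=1):
            if any(pattern.search(line) for pattern in IOC_PATTERNS):
                yield lineno, line.rstrip()

if __name__ == "__main__":
    for lineno, line in scan_log(Path("device_access.log")):  # exported device log
        print(f"possible indicator of compromise at line {lineno}: {line}")
```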

Published Date : Oct 14 2022


Mahesh Nagarathnam, Dell Technologies


 

>>We're back with a blueprint for trusted infrastructure, in partnership with Dell Technologies and the Cube. And we're here with Mahesh Nagarathnam, who is a consultant in the area of networking product management at Dell Technologies. Mahesh, welcome. Good to see you. >>Hey, good morning, Dave. Nice to meet you as well. >>Hey, so we've been digging into all the parts of the infrastructure stack, and now we're gonna look at the all-important networking components. Mahesh, when we think about networking in today's environment, we think about the core data center, and we're connecting out to various locations, including the cloud and both the near and the far edge. So the question is, from Dell's perspective, what's unique and challenging about securing network infrastructure that we should know about? >>Yeah, so a few years ago, IT security in an enterprise was primarily about putting a wrapper around the data center, because it was constrained to an infrastructure owned and operated by the enterprise for the most part. So putting a wrapper around it, like a perimeter or a firewall, was a sufficient response, because you could basically control the environment. That's no longer the case today: with distributed data, intelligent software-defined systems, multi-cloud environments, and as-a-service delivery, the infrastructure of the modern era changes the way you secure the network. In today's data-driven world, IT operates everywhere, and data is created and accessed everywhere, far from the centralized, monolithic data centers of the past. The biggest challenge is how we build network infrastructure for the modern era that is intelligent, with automation enabling maximum flexibility and business agility, without any compromise on security. We believe that in this data era, the security transformation must accompany the digital transformation. >>Yeah, that's very good. You talked about a couple of things there. Data by its very nature is distributed. There is no perimeter anymore, so you can't just, as you say, put a wrapper around it. I like the way you phrase that. So when you think about cybersecurity resilience from a networking perspective, how do you define that? In other words, what are the basic principles that you adhere to when thinking about securing network infrastructure for your customers? >>So our belief is that cybersecurity and cybersecurity resilience need to be holistic. They need to be integrated, scalable, spanning the entire enterprise, with consistent objectives and policy implementation. So cybersecurity needs to span all the devices and run across any application, whether the application resides in the cloud or anywhere else in the infrastructure. From a networking standpoint, what does that mean? It's the same principles: preventing threat actors from accessing, changing, destroying, or stealing sensitive data. That definition holds good for networking as well. So if you look at it from a networking perspective, it's the ability to protect from and withstand attacks on the networking systems as they continue to evolve. It also includes the ability to adapt to and recover from these attacks, which is what the cyber resilience aspect is all about. And cybersecurity best practices, as you know, are a continuously changing landscape, primarily because the cyber threats also continue to evolve. >>Yeah, got it. So I like that.
So it's gotta be integrated, it's gotta be scalable, it's gotta be comprehensive and adaptable. You're saying it can't be static. >>Right, right. So I think you had a second part of the question, which asks what the basic principles are when you're thinking about securing network infrastructure. When you are looking at securing the network infrastructure, it revolves around the core security capabilities of the devices that form the network. And what are these security capabilities? They are access control, software integrity, and vulnerability response. When you look at access control, it's to ensure that only authenticated users are able to access the platform, and that they're able to access only the assets they're authorized to based on their user level. Now, access to a network platform like a switch or a router, for example, is typically used for configuration and management of the networking device. So user access is based on roles, for that matter, in a role-based access control model, whether you are a security admin, a network admin, or a storage admin. >>And it's imperative that logging is enabled, because any change to the configuration is logged and monitored as well. Talking about software integrity, it's the ability to ensure that the software running on the system has not been compromised. And this is important because otherwise an attacker could get hold of the system and you could get undesired results. Validation of the images needs to be done through a digital signature. So it's important that when you're talking about software integrity, A, you are ensuring that the platform is not compromised, and B, that any upgrades that happen to the platform happen through a validated signature. >>Okay. So there's access control, software integrity, and I think you've got a third element, which is, I think, response, but please continue. >>Yeah, so the third one is about vulnerability response. We follow the same process that's followed by the rest of the products within the Dell product family, which is to report or identify any kind of vulnerability, and that's addressed by the Dell product security incident response team. The networking portfolio is no different. It follows the same process for identification, for triage, and for resolution of these vulnerabilities. And these are addressed either through patches or through new releases of the networking software. >>Yeah, got it. Okay. So I mean, you didn't say zero trust, but when you were talking about access control, you're really talking about access to only those assets that people are authorized to access. I know zero trust is sometimes a buzzword, but I think you gave it some clarity there. Software integrity, it's about assurance and validation, the digital signature you mentioned, and that there's been no compromise. And then how you respond to incidents in a standard way that can fit into a security framework. So outstanding description, thank you for that. But then the next question is, how does Dell networking fit into the construct of what we've been talking about, Dell trusted infrastructure? >>Okay, so networking is the key element in the Dell trusted infrastructure. It provides the interconnect between the server and the storage worlds.
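Mahesh's point about validating images through a digital signature can be illustrated with a small sketch. This is a generic example using the third-party cryptography package and a detached RSA signature; the file names and key are hypothetical, and this is not Dell's actual upgrade tooling, which handles signing and validation internally.

```python
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def image_signature_valid(image: Path, signature: Path, vendor_pubkey: Path) -> bool:
    """Return True only if the detached signature over the image verifies."""
    public_key = serialization.load_pem_public_key(vendor_pubkey.read_bytes())
    try:
        public_key.verify(
            signature.read_bytes(),
            image.read_bytes(),
            padding.PKCS1v15(),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False

# Hypothetical upgrade guard: refuse to stage an image that fails validation.
if not image_signature_valid(Path("os10.bin"), Path("os10.bin.sig"), Path("vendor.pub")):
    raise SystemExit("image failed signature validation; aborting upgrade")
```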
And you know, it's part of any data center configuration for a trusted infrastructure. The network needs to have access control in place where only the authorized nels are able to make change to the network configuration and logging of any of those changes is also done through the logging capabilities. Additionally, we should also ensure that the configuration should provide network isolation between say the management network and the data traffic network because they need to be separate and distinct from each other. And furthermore, even if you look at the data traffic network and now you have things like say segmentation isolated segments, I know via vrs or, or some micro segmentation via partners, this allows various level of security for each of those segments. >>So it's important, you know, that, that the network infrastructure has the ability, you know, to provide all this, this services from a Dell networking security perspective, right? You know, there are multiple layers of defense, you know, both at the edge and in the network, in the hardware and in the software and essentially, you know, a set of rules and a configuration that's designed to sort of protect the integrity, confidentiality, and accessibility of the network assets. So each network security layer, it implements policies and controls as I said, you know, including send network segmentation. We do have capabilities sources, centralized management automation and capability and scalability for that matter. Now you add all of these things, you know, with the open networking standards or software, different principles and you essentially, you know, reach to the point where you know, you're looking at zero trust network access, which is essentially sort of a building block for increased cloud adoption. >>If you look at say that you know the different pillars of a zero touch architecture, you know, if you look at the device aspect, you know, we do have support for security for example, we do have say trusted platform in a trusted platform models tpms on certain offer products and you know, the physical security know, plain, simple old one lab port enabled from a user trust perspective, we know it's all done via access control days via role based access control and say capability in order to provide say remote authentication or things like say sticky Mac or Mac learning limit and so on. If you look at say a transport and a session trust layer, these are essentially, you know, how do you access, you know, this switch, you know, is it by plain or telenet or is it like secure ssh, right? And you know, when a host communicates, you know, to the switch, we do have things like self-signed or a certificate authority based certification. >>And one of the important aspect is, you know, in terms of, you know, the routing protocol, the routing protocol, say for example BGP for example, we do have the capability to support MD five authentication between the VGP peers so that there is no, you know, manages attack, you know, to the network where the routing table is compromised. And the other aspect is about second control plane is here in now, you know, it's, it's typical that if you don't have a contra plane here, you know, it could be flooded and you know, you know, the switch could be compromised by city denial service attacks. 
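The isolation and micro-segmentation described here come down to default-deny policy between segments. The sketch below is a toy model of that idea; the segment names, ports, and flows are hypothetical and do not represent an actual Dell or partner policy format.

```python
# Toy segment-to-segment policy: management and data traffic stay isolated,
# and a flow is allowed only if it is explicitly listed (default deny).
ALLOWED_FLOWS = {
    ("app_segment", "db_segment"): {5432},       # app servers may reach the database port
    ("mgmt_segment", "switch_mgmt"): {22, 443},  # management net may reach device management
}

def flow_permitted(src: str, dst: str, port: int) -> bool:
    """Return True only if the (src, dst) segment pair explicitly allows the port."""
    return port in ALLOWED_FLOWS.get((src, dst), set())

# A data-plane segment cannot reach the management plane, even on common ports.
assert flow_permitted("app_segment", "db_segment", 5432)
assert not flow_permitted("app_segment", "switch_mgmt", 22)
```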
From an application trust perspective, as I mentioned, you know, we do have, you know, the application specific security rules where you could actually define, you know, the specific security rules based on the specific applications, you know, that are running within the system. >>And I did talk about, say the digital signature and the cryptographic checks and that we do for authentication and for, I mean rather for the authenticity and the validation of, you know, of the image and the BS and so on and so forth. Finally, you know, the data trust, we are looking at, you know, the network separation, you know, the network separation could happen or VRF plain old wheel Ls, you know, which can bring about say multitenancy aspects. We talk about some microsegmentation as it applies to nsx for example. The other aspect is, you know, we do have, with our own smart fabric services that's enabled in a fabric, we have a concept of c cluster security. So all of this, you know, the different pillars, they sort of make up for the zero trust infrastructure for the networking assets of an infrastructure. >>Yeah. So thank you for that. There's a, there's a lot to unpack there. You know, one of the premise, the premise really of this, this, this, this segment that we're setting up in this series is really that everything you just mentioned, or a lot of things you just mentioned used to be the responsibility of the security team. And, and the premise that we're putting forth is that because security teams are so stretched thin, you, you gotta shift a vendor community. Dell specifically is shifting a lot of those tasks to their own r and d and taking care of a lot of that. So, cuz sec op teams got a lot of other stuff to, to worry about. So my question relates to things like automation, which can help and scalability, what about those topics as it relates to networking infrastructure? >>Okay, our portfolio, >>It enables state of the automation software, you know, that enables simplifying of the design. So for example, we do have, you know, you know the fabric design center, you know, a tool that automates the design of the anti fabric and you know, from a deployment and you know, the management of the network infrastructure, there are simplicities, you know, using, you know, like Ansible s for Sonic for example, are, you know, for a better or settle and tell story. You know, we do have smart fabric services that can automate the entire fabric, you know, for a storage solution or for, you know, for one of the workloads for example. Now we do help reduce the complexity by closely integrating the management of the physical and the virtual networking infrastructure. And again, you know, we have those capabilities using Sonic or Smart Traffic services. If you look at Sonic for example, right? >>It delivers automated intent based secure containerized network and it has the ability to provide some network visibility and awareness and, and all of these things are actually valid, you know, for a modern networking infrastructure. So now if you look at Sonic, you know, it's, you know, the usage of those tools, you know, that are available, you know, within the Sonic NAS is not restricted, you know, just to the data center infrastructure is, it's a unified no, you know, that's well applicable beyond the data center. You now right up to the edge. 
Now if you look at our north from a smart traffic voice 10 perspective, you know, as I mentioned, we do have smart fabric services which essentially, you know, simplifies the deployment day zero. I mean rather day one, day two deployment expansion plans and the life cycle management of our conversion infrastructure and hyper and hyperconverge infrastructure solutions. And finally, in order to enable say, zero touch deployment, we do have, you know, a VP solution with our SD van capability. So these are, you know, ways by which we bring down the complexity by, you know, enhancing the automation capability using, you know, a singular loss that can expand from a data center now right to the edge. >>Great, thank you for that. Last question real quick pitch me, can you summarize from your point of view, what's the strength of the Dell networking portfolio? >>Okay, so from a Dell networking portfolio, we support capabilities at multiple layers. As I mentioned. We've talking about the physical security, for example, let's say disabling of the unused interface. Sticky Mac and trusted platform modules are the things that to go after. And when you're talking about say secure boot for example, it delivers the authenticity and the integrity of the OS 10 images at the startup. And Secure Boot also protects the startup configuration so that, you know, the startup configuration file is not compromised. And Secure port also enables the workload of prediction, for example, that is at another aspect of software image integrity validation, you know, wherein the image is validated for the digital signature in know prior to any upgrade process. And if you are looking at secure access control, we do have things like role-based access control, SSH to the switches, control plane access control that pretty do attacks and say access control from multifactor authentication. >>We do have various tech hacks for entry control to the network and things like CSAC and P IV support, you know, from a federal perspective, we do have, say logging wherein, you know, any event, any auditing capabilities can be possible by say, looking at the clog service, you know, which are pretty much in our transmitter from the devices overts for example, and last we talked about say networks, you know, say network separation and you know, these, you know, separation, you know, ensures that that is, you know, a contained say segment, you know, for a specific purpose or for the specific zone. And you know, this can be implemented by a, the micro segmentation, you know, just a plain old wheel are using virtual route of framework vr, for example. >>A lot there. I mean, I think, frankly, you know, my takeaway is you guys do the heavy lifting in a very complicated topic. So thank you so much for, for coming on the cube and explaining that in, in quite some depth. Really appreciate it. >>Thank you indeed. >>Oh, you're very welcome. Okay, in a moment I'll be back to dig into the hyper-converged infrastructure part of the portfolio and look at how when you enter the world of software defined where you're controlling servers and storage and networks via software led system, you can be sure that your infrastructure is trusted and secure. You're watching a blueprint for trusted infrastructure made possible by Dell Technologies and collaboration with the Cube, your leader in enterprise and emerging tech coverage.

Published Date : Oct 4 2022


Blueprint for Trusted Infrastructure Episode 2 Full Episode 10-4 V2


 

>>The cybersecurity landscape continues to be one characterized by a series of point tools designed to do a very specific job, often pretty well, but the mosaic of tooling is grown over the years causing complexity in driving up costs and increasing exposures. So the game of Whackamole continues. Moreover, the way organizations approach security is changing quite dramatically. The cloud, while offering so many advantages, has also created new complexities. The shared responsibility model redefines what the cloud provider secures, for example, the S three bucket and what the customer is responsible for eg properly configuring the bucket. You know, this is all well and good, but because virtually no organization of any size can go all in on a single cloud, that shared responsibility model now spans multiple clouds and with different protocols. Now that of course includes on-prem and edge deployments, making things even more complex. Moreover, the DevOps team is being asked to be the point of execution to implement many aspects of an organization's security strategy. >>This extends to securing the runtime, the platform, and even now containers which can end up anywhere. There's a real need for consolidation in the security industry, and that's part of the answer. We've seen this both in terms of mergers and acquisitions as well as platform plays that cover more and more ground. But the diversity of alternatives and infrastructure implementations continues to boggle the mind with more and more entry points for the attackers. This includes sophisticated supply chain attacks that make it even more difficult to understand how to secure components of a system and how secure those components actually are. The number one challenge CISOs face in today's complex world is lack of talent to address these challenges. And I'm not saying that SecOps pros are not talented, They are. There just aren't enough of them to go around and the adversary is also talented and very creative, and there are more and more of them every day. >>Now, one of the very important roles that a technology vendor can play is to take mundane infrastructure security tasks off the plates of SEC off teams. Specifically we're talking about shifting much of the heavy lifting around securing servers, storage, networking, and other infrastructure and their components onto the technology vendor via r and d and other best practices like supply chain management. And that's what we're here to talk about. Welcome to the second part in our series, A Blueprint for Trusted Infrastructure Made Possible by Dell Technologies and produced by the Cube. My name is Dave Ante and I'm your host now. Previously we looked at what trusted infrastructure means and the role that storage and data protection play in the equation. In this part two of the series, we explore the changing nature of technology infrastructure, how the industry generally in Dell specifically, are adapting to these changes and what is being done to proactively address threats that are increasingly stressing security teams. >>Now today, we continue the discussion and look more deeply into servers networking and hyper-converged infrastructure to better understand the critical aspects of how one company Dell is securing these elements so that dev sec op teams can focus on the myriad new attack vectors and challenges that they faced. First up is Deepak rang Garage Power Edge security product manager at Dell Technologies. 
And after that we're gonna bring on Mahesh Nagar oim, who was consultant in the networking product management area at Dell. And finally, we're close with Jerome West, who is the product management security lead for HCI hyperconverged infrastructure and converged infrastructure at Dell. Thanks for joining us today. We're thrilled to have you here and hope you enjoy the program. Deepak Arage shoes powered security product manager at Dell Technologies. Deepak, great to have you on the program. Thank you. >>Thank you for having me. >>So we're going through the infrastructure stack and in part one of this series we looked at the landscape overall and how cyber has changed and specifically how Dell thinks about data protection in, in security in a manner that both secures infrastructure and minimizes organizational friction. We also hit on the storage part of the portfolio. So now we want to dig into servers. So my first question is, what are the critical aspects of securing server infrastructure that our audience should be aware of? >>Sure. So if you look at compute in general, right, it has rapidly evolved over the past couple of years, especially with trends toward software defined data centers and with also organizations having to deal with hybrid environments where they have private clouds, public cloud locations, remote offices, and also remote workers. So on top of this, there's also an increase in the complexity of the supply chain itself, right? There are companies who are dealing with hundreds of suppliers as part of their supply chain. So all of this complexity provides a lot of opportunity for attackers because it's expanding the threat surface of what can be attacked, and attacks are becoming more frequent, more severe and more sophisticated. And this has also triggered around in the regulatory and mandates around the security needs. >>And these regulations are not just in the government sector, right? So it extends to critical infrastructure and eventually it also get into the private sector. In addition to this, organizations are also looking at their own internal compliance mandates. And this could be based on the industry in which they're operating in, or it could be their own security postures. And this is the landscape in which servers they're operating today. And given that servers are the foundational blocks of the data center, it becomes extremely important to protect them. And given how complex the modern server platforms are, it's also extremely difficult and it takes a lot of effort. And this means protecting everything from the supply chain to the manufacturing and then eventually the assuring the hardware and software integrity of the platforms and also the operations. And there are very few companies that go to the lens that Dell does in order to secure the server. We truly believe in the notion and the security mentality that, you know, security should enable our customers to go focus on their business and proactively innovate on their business and it should not be a burden to them. And we heavily invest to make that possible for our customers. >>So this is really important because the premise that I set up at the beginning of this was really that I, as of security pro, I'm not a security pro, but if I were, I wouldn't want to be doing all this infrastructure stuff because I now have all these new things I gotta deal with. I want a company like Dell who has the resources to build that security in to deal with the supply chain to ensure the providence, et cetera. 
So I'm glad you you, you hit on that, but so given what you just said, what does cybersecurity resilience mean from a server perspective? For example, are there specific principles that Dell adheres to that are non-negotiable? Let's say, how does Dell ensure that its customers can trust your server infrastructure? >>Yeah, like when, when it comes to security at Dell, right? It's ingrained in our product, so that's the best way to put it. And security is nonnegotiable, right? It's never an afterthought where we come up with a design and then later on figure out how to go make it secure, right? Our security development life cycle, the products are being designed to counter these threats right from the big. And in addition to that, we are also testing and evaluating these products continuously to identify vulnerabilities. We also have external third party audits which supplement this process. And in addition to this, Dell makes the commitment that we will rapidly respond to any mitigations and vulnerability, any vulnerabilities and exposures found out in the field and provide mitigations and patches for in attacking manner. So this security principle is also built into our server life cycle, right? Every phase of it. >>So we want our products to provide cutting edge capabilities when it comes to security. So as part of that, we are constantly evaluating what our security model is done. We are building on it and continuously improving it. So till a few years ago, our model was primarily based on the N framework of protect, detect and rigor. And it's still aligns really well to that framework, but over the past couple of years, we have seen how computers evolved, how the threads have evolved, and we have also seen the regulatory trends and we recognize the fact that the best security strategy for the modern world is a zero trust approach. And so now when we are building our infrastructure and tools and offerings for customers, first and foremost, they're cyber resilient, right? What we mean by that is they're capable of anticipating threats, withstanding attacks and rapidly recurring from attacks and also adapting to the adverse conditions in which they're deployed. The process of designing these capabilities and identifying these capabilities however, is done through the zero press framework. And that's very important because now we are also anticipating how our customers will end up using these capabilities at there and to enable their own zero trust IT environments and IT zero trusts deployments. We have completely adapted our security approach to make it easier for customers to work with us no matter where they are in their journey towards zero trust option. >>So thank you for that. You mentioned the, this framework, you talked about zero trust. When I think about n I think as well about layered approaches. And when I think about zero trust, I think about if you, if you don't have access to it, you're not getting access, you've gotta earn that, that access and you've got layers and then you still assume that bad guys are gonna get in. So you've gotta detect that and you've gotta response. So server infrastructure security is so fundamental. So my question is, what is Dell providing specifically to, for example, detect anomalies and breaches from unauthorized activity? How do you enable fast and easy or facile recovery from malicious incidents, >>Right? What is that is exactly right, right? 
Breachers are bound to happen and given how complex our current environment is, it's extremely distributed and extremely connected, right? Data and users are no longer contained with an offices where we can set up a perimeter firewall and say, Yeah, everything within that is good. We can trust everything within it. That's no longer true. The best approach to protect data and infrastructure in the current world is to use a zero trust approach, which uses the principles. Nothing is ever trusted, right? Nothing is trusted implicitly. You're constantly verifying every single user, every single device, and every single access in your system at every single level of your ID environment. And this is the principles that we use on power Edge, right? But with an increased focus on providing granular controls and checks based on the principles of these privileged access. >>So the idea is that service first and foremost need to make sure that the threats never enter and they're rejected at the point of entry, but we recognize breaches are going to occur and if they do, they need to be minimized such that the sphere of damage cost by attacker is minimized so they're not able to move from one part of the network to something else laterally or escalate their privileges and cause more damage, right? So the impact radius for instance, has to be radius. And this is done through features like automated detection capabilities and automation, automated remediation capabilities. So some examples are as part of our end to end boot resilience process, we have what they call a system lockdown, right? We can lock down the configuration of the system and lock on the form versions and all changes to the system. And we have capabilities which automatically detect any drift from that lockdown configuration and we can figure out if the drift was caused to authorized changes or unauthorized changes. >>And if it is an unauthorize change can log it, generate security alerts, and we even have capabilities to automatically roll the firm where, and always versions back to a known good version and also the configurations, right? And this becomes extremely important because as part of zero trust, we need to respond to these things at machine speed and we cannot do it at a human speed. And having these automated capabilities is a big deal when achieving that zero trust strategy. And in addition to this, we also have chassis inclusion detection where if the chassis, the box, the several box is opened up, it logs alerts, and you can figure out even later if there's an AC power cycle, you can go look at the logs to see that the box is opened up and figure out if there was a, like a known authorized access or some malicious actor opening and chain something in your system. >>Great, thank you for that lot. Lot of detail and and appreciate that. I want to go somewhere else now cuz Dell has a renowned supply chain reputation. So what about securing the, the supply chain and the server bill of materials? What does Dell specifically do to track the providence of components it uses in its systems so that when the systems arrive, a customer can be a hundred percent certain that that system hasn't been compromised, >>Right? And we've talked about how complex the modern supply chain is, right? And that's no different for service. We have hundreds of confidence on the server and a lot of these form where in order to be configured and run and this former competence could be coming from third parties suppliers. 
So now the complexity that we are dealing with like was the end to end approach and that's where Dell pays a lot of attention into assuring the security approach approaching and it starts all the way from sourcing competence, right? And then through the design and then even the manufacturing process where we are wetting the personnel leather factories and wetting the factories itself. And the factories also have physical controls, physical security controls built into them and even shipping, right? We have GPS tagging of packages. So all of this is built to ensure supply chain security. >>But a critical aspect of this is also making sure that the systems which are built in the factories are delivered to the customers without any changes or any tapper. And we have a feature called the secure component verification, which is capable of doing this. What the feature does this, when the system gets built in a factory, it generates an inventory of all the competence in the system and it creates a cryptographic certificate based on the signatures presented to this by the competence. And this certificate is stored separately and sent to the customers separately from the system itself. So once the customers receive the system at their end, they can run out to, it generates an inventory of the competence on the system at their end and then compare it to the golden certificate to make sure nothing was changed. And if any changes are detected, we can figure out if there's an authorized change or unauthorize change. >>Again, authorized changes could be like, you know, upgrades to the drives or memory and ized changes could be any sort of temper. So that's the supply chain aspect of it and bill of metal use is also an important aspect to galing security, right? And we provide a software bill of materials, which is basically a list of ingredients of all the software pieces in the platform. So what it allows our customers to do is quickly take a look at all the different pieces and compare it to the vulnerability database and see if any of the vulner which have been discovered out in the wild affected platform. So that's a quick way of figuring out if the platform has any known vulnerabilities and it has not been patched. >>Excellent. That's really good. My last question is, I wonder if you, you know, give us the sort of summary from your perspective, what are the key strengths of Dell server portfolio from a security standpoint? I'm really interested in, you know, the uniqueness and the strong suit that Dell brings to the table, >>Right? Yeah. We have talked enough about the complexity of the environment and how zero risk is necessary for the modern ID environment, right? And this is integral to Dell powered service. And as part of that like you know, security starts with the supply chain. We already talked about the second component verification, which is a beneath feature that Dell platforms have. And on top of it we also have a silicon place platform mode of trust. So this is a key which is programmed into the silicon on the black service during manufacturing and can never be changed after. And this immutable key is what forms the anchor for creating the chain of trust that is used to verify everything in the platform from the hardware and software integrity to the boot, all pieces of it, right? In addition to that, we also have a host of data protection features. 
>>Whether it is protecting data at risk in news or inflight, we have self encrypting drives which provides scalable and flexible encryption options. And this couple with external key management provides really good protection for your data address. External key management is important because you know, somebody could physically steam the server walk away, but then the keys are not stored on the server, it stood separately. So that provides your action layer of security. And we also have dual layer encryption where you can compliment the hardware encryption on the secure encrypted drives with software level encryption. Inion to this we have identity and access management features like multifactor authentication, single sign on roles, scope and time based access controls, all of which are critical to enable that granular control and checks for zero trust approach. So I would say like, you know, if you look at the Dell feature set, it's pretty comprehensive and we also have the flexibility built in to meet the needs of all customers no matter where they fall in the spectrum of, you know, risk tolerance and security sensitivity. And we also have the capabilities to meet all the regulatory requirements and compliance requirements. So in a nutshell, I would say that you know, Dell Power Service cyber resident infrastructure helps accelerate zero tested option for customers. >>Got it. So you've really thought this through all the various things that that you would do to sort of make sure that your server infrastructure is secure, not compromised, that your supply chain is secure so that your customers can focus on some of the other things that they have to worry about, which are numerous. Thanks Deepak, appreciate you coming on the cube and participating in the program. >>Thank you for having >>You're welcome. In a moment I'll be back to dig into the networking portion of the infrastructure. Stay with us for more coverage of a blueprint for trusted infrastructure and collaboration with Dell Technologies on the cube, your leader in enterprise and emerging tech coverage. We're back with a blueprint for trusted infrastructure and partnership with Dell Technologies in the cube. And we're here with Mahesh Nager, who is a consultant in the area of networking product management at Dell Technologies. Mahesh, welcome, good to see you. >>Hey, good morning Dell's, nice to meet, meet to you as well. >>Hey, so we've been digging into all the parts of the infrastructure stack and now we're gonna look at the all important networking components. Mahesh, when we think about networking in today's environment, we think about the core data center and we're connecting out to various locations including the cloud and both the near and the far edge. So the question is from Dell's perspective, what's unique and challenging about securing network infrastructure that we should know about? >>Yeah, so few years ago IT security and an enterprise was primarily putting a wrapper around data center out because it was constrained to an infrastructure owned and operated by the enterprise for the most part. 
So putting a rapid around it like a parameter or a firewall was a sufficient response because you could basically control the environment and data small enough control today with the distributed data, intelligent software, different systems, multi-cloud environment and asset service delivery, you know, the infrastructure for the modern era changes the way to secure the network infrastructure In today's, you know, data driven world, it operates everywhere and data has created and accessed everywhere so far from, you know, the centralized monolithic data centers of the past. The biggest challenge is how do we build the network infrastructure of the modern era that are intelligent with automation enabling maximum flexibility and business agility without any compromise on the security. We believe that in this data era, the security transformation must accompany digital transformation. >>Yeah, that's very good. You talked about a couple of things there. Data by its very nature is distributed. There is no perimeter anymore, so you can't just, as you say, put a rapper around it. I like the way you phrase that. So when you think about cyber security resilience from a networking perspective, how do you define that? In other words, what are the basic principles that you adhere to when thinking about securing network infrastructure for your customers? >>So our belief is that cybersecurity and cybersecurity resilience, they need to be holistic, they need to be integrated, scalable, one that span the entire enterprise and with a co and objective and policy implementation. So cybersecurity needs to span across all the devices and running across any application, whether the application resets on the cloud or anywhere else in the infrastructure. From a networking standpoint, what does it mean? It's again, the same principles, right? You know, in order to prevent the threat actors from accessing changing best destroy or stealing sensitive data, this definition holds good for networking as well. So if you look at it from a networking perspective, it's the ability to protect from and withstand attacks on the networking systems as we continue to evolve. This will also include the ability to adapt and recover from these attacks, which is what cyber resilience aspect is all about. So cybersecurity best practices, as you know, is continuously changing the landscape primarily because the cyber threats also continue to evolve. >>Yeah, got it. So I like that. So it's gotta be integrated, it's gotta be scalable, it's gotta be comprehensive, comprehensive and adaptable. You're saying it can't be static, >>Right? Right. So I think, you know, you had a second part of a question, you know, that says what do we, you know, what are the basic principles? You know, when you think about securing network infrastructure, when you're looking at securing the network infrastructure, it revolves around core security capability of the devices that form the network. And what are these security capabilities? These are access control, software integrity and vulnerability response. When you look at access control, it's to ensure that only the authenticated users are able to access the platform and they're able to access only the kind of the assets that they're authorized to based on their user level. Now accessing a network platform like a switch or a rotor for example, is typically used for say, configuration and management of the networking switch. 
So user access is based on say roles for that matter in a role based access control, whether you are a security admin or a network admin or a storage admin. >>And it's imperative that logging is enable because any of the change to the configuration is actually logged and monitored as that. Talking about software's integrity, it's the ability to ensure that the software that's running on the system has not been compromised. And, and you know, this is important because it could actually, you know, get hold of the system and you know, you could get UND desire results in terms of say validation of the images. It's, it needs to be done through say digital signature. So, so it's important that when you're talking about say, software integrity, a, you are ensuring that the platform is not compromised, you know, is not compromised and be that any upgrades, you know, that happens to the platform is happening through say validated signature. >>Okay. And now, now you've now, so there's access control, software integrity, and I think you, you've got a third element which is i I think response, but please continue. >>Yeah, so you know, the third one is about civil notability. So we follow the same process that's been followed by the rest of the products within the Dell product family. That's to report or identify, you know, any kind of a vulnerability that's being addressed by the Dell product security incident response team. So the networking portfolio is no different, you know, it follows the same process for identification for tri and for resolution of these vulnerabilities. And these are addressed either through patches or through new reasons via networking software. >>Yeah, got it. Okay. So I mean, you didn't say zero trust, but when you were talking about access control, you're really talking about access to only those assets that people are authorized to access. I know zero trust sometimes is a buzzword, but, but you I think gave it, you know, some clarity there. Software integrity, it's about assurance validation, your digital signature you mentioned and, and that there's been no compromise. And then how you respond to incidents in a standard way that can fit into a security framework. So outstanding description, thank you for that. But then the next question is, how does Dell networking fit into the construct of what we've been talking about Dell trusted infrastructure? >>Okay, so networking is the key element in the Dell trusted infrastructure. It provides the interconnect between the service and the storage world. And you know, it's part of any data center configuration for a trusted infrastructure. The network needs to have access control in place where only the authorized nels are able to make change to the network configuration and logging off any of those changes is also done through the logging capabilities. Additionally, we should also ensure that the configuration should provide network isolation between say the management network and the data traffic network because they need to be separate and distinct from each other. And furthermore, even if you look at the data traffic network and now you have things like segmentation isolated segments and via VRF or, or some micro segmentation via partners, this allows various level of security for each of those segments. So it's important you know, that, that the network infrastructure has the ability, you know, to provide all this, this services from a Dell networking security perspective, right? 
>>You know, there are multiple layer of defense, you know, both at the edge and in the network in this hardware and in the software and essentially, you know, a set of rules and a configuration that's designed to sort of protect the integrity, confidentiality, and accessibility of the network assets. So each network security layer, it implements policies and controls as I said, you know, including send network segmentation. We do have capabilities sources, centralized management automation and capability and scalability for that matter. Now you add all of these things, you know, with the open networking standards or software, different principles and you essentially, you know, reach to the point where you know, you're looking at zero trust network access, which is essentially sort of a building block for increased cloud adoption. If you look at say that you know the different pillars of a zero trust architecture, you know, if you look at the device aspect, you know, we do have support for security for example, we do have say trust platform in a trusted platform models tpms on certain offer products and you know, the physical security know plain, simple old one love port enable from a user trust perspective, we know it's all done via access control days via role based access control and say capability in order to provide say remote authentication or things like say sticky Mac or Mac learning limit and so on. >>If you look at say a transport and decision trust layer, these are essentially, you know, how do you access, you know, this switch, you know, is it by plain hotel net or is it like secure ssh, right? And you know, when a host communicates, you know, to the switch, we do have things like self-signed or is certificate authority based certification. And one of the important aspect is, you know, in terms of, you know, the routing protocol, the routing protocol, say for example BGP for example, we do have the capability to support MD five authentication between the b g peers so that there is no, you know, manages attack, you know, to the network where the routing table is compromised. And the other aspect is about second control plane is here, you know, you know, it's, it's typical that if you don't have a control plane here, you know, it could be flooded and you know, you know, the switch could be compromised by city denial service attacks. >>From an application test perspective, as I mentioned, you know, we do have, you know, the application specific security rules where you could actually define, you know, the specific security rules based on the specific applications, you know, that are running within the system. And I did talk about, say the digital signature and the cryptographic check that we do for authentication and for, I mean rather for the authenticity and the validation of, you know, of the image and the BS and so on and so forth. Finally, you know, the data trust, we are looking at, you know, the network separation, you know, the network separation could happen or VRF plain old wheel Ls, you know, which can bring about sales multi 10 aspects. We talk about some microsegmentation as it applies to nsx for example. The other aspect is, you know, we do have, with our own smart fabric services that's enabled in a fabric, we have a concept of c cluster security. So all of this, you know, the different pillars, they sort of make up for the zero trust infrastructure for the networking assets of an infrastructure. >>Yeah. So thank you for that. 
>> Yeah, so thank you for that. There's a lot to unpack there. You know, the premise really of this segment that we're setting up in this series is that everything you just mentioned, or a lot of the things you just mentioned, used to be the responsibility of the security team. And the premise that we're putting forth is that, because security teams are so stretched thin, you've got to shift a lot of those tasks to the vendor community; Dell specifically is shifting a lot of those tasks to their own R&D and taking care of a lot of that, because SecOps teams have got a lot of other stuff to worry about. So my question relates to things like automation, which can help, and scalability. What about those topics as they relate to networking infrastructure? >> Okay, our portfolio enables state-of-the-art automation software that simplifies the design. So for example, we do have the Fabric Design Center, a tool that automates the design of the fabric, and from a deployment and management standpoint for the network infrastructure there are simplicities using, say, Ansible modules for SONiC, for a better automation story. We do have SmartFabric Services that can automate the entire fabric, for a storage solution or for one of the workloads, for example. Now, we do help reduce the complexity by closely integrating the management of the physical and the virtual networking infrastructure, and again, we have those capabilities using SONiC or SmartFabric Services. If you look at SONiC, for example, it delivers automated, intent-based, secure, containerized networking, and it has the ability to provide network visibility and analytics, and all of these things are valid for a modern networking infrastructure. So now, the usage of those tools that are available within the SONiC NOS is not restricted just to the data center infrastructure; it's a unified NOS that's applicable well beyond the data center, right up to the edge. Now, if you look at our SmartFabric OS10, as I mentioned, we do have SmartFabric Services, which essentially simplify the deployment, day zero, or rather day one, day two deployment, expansion plans, and the lifecycle management of our converged infrastructure and hyperconverged infrastructure solutions. And finally, in order to enable, say, zero-touch deployment, we do have a VEP solution with our SD-WAN capability. So these are ways by which we bring down the complexity by enhancing the automation capability, using a single NOS that can span from the data center right to the edge.
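To give a feel for what intent-based automation means in practice, here is a toy sketch that renders switch configuration from a small intent document. The intent schema and the generated CLI syntax are invented for illustration; they are not the output of Fabric Design Center, the Ansible modules for SONiC, or SmartFabric Services:

```python
# Hypothetical intent document (illustrative only).
intent = {
    "hostname": "leaf01",
    "vlans": [{"id": 10, "name": "app"}, {"id": 20, "name": "storage"}],
    "uplinks": [{"port": "ethernet1/1/1", "description": "to-spine1"}],
}

def render_config(intent: dict) -> str:
    """Expand the declared intent into device configuration lines."""
    lines = [f"hostname {intent['hostname']}"]
    for vlan in intent["vlans"]:
        lines += [f"vlan {vlan['id']}", f"  name {vlan['name']}"]
    for uplink in intent["uplinks"]:
        lines += [f"interface {uplink['port']}",
                  f"  description {uplink['description']}",
                  "  no shutdown"]
    return "\n".join(lines)

print(render_config(intent))
```

In a real deployment a tool such as an Ansible playbook would push the rendered configuration and verify the result, which is where the scalability benefit comes from.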
>> Great, thank you for that. Last question, real quick, just pitch me. What can you summarize, from your point of view, as the strength of the Dell networking portfolio? >> Okay, so the Dell networking portfolio supports capabilities at multiple layers. As I mentioned, we're talking about the physical security, for example, say, disabling of the unused interfaces; sticky MAC and trusted platform modules are some of the things we go after. And when you're talking about, say, secure boot, it delivers the authenticity and the integrity of the OS10 images at startup. And Secure Boot also protects the startup configuration, so that the startup configuration file is not compromised, and Secure Boot also enables workload protection, for example. There is another aspect, of software image integrity validation, wherein the image is validated against the digital signature prior to any upgrade process. And if you are looking at secure access control, we do have things like role-based access control, SSH to the switches, control plane access control, and, say, access control with multifactor authentication. >> We do have various methods, say TACACS, for entry control to the network, and things like CAC and PIV support from a federal perspective. We do have, say, logging, wherein any events and auditing capabilities are possible by, say, looking at the syslog service, which is pretty much transmitted from the devices over TLS, for example. And lastly, we talked about, say, network segmentation, network separation, which ensures that there is a contained segment for a specific purpose or for a specific zone, and it can be implemented by micro-segmentation, a plain old VLAN, or using a virtual routing and forwarding framework, VRF, for example. >> A lot there. I mean, I think, frankly, my takeaway is you guys do the heavy lifting on a very complicated topic. So thank you so much for coming on theCUBE and explaining that in quite some depth. Really appreciate it. >> Thank you indeed. >> Oh, you're very welcome. Okay, in a moment I'll be back to dig into the hyper-converged infrastructure part of the portfolio, and look at how, when you enter the world of software-defined, where you're controlling servers and storage and networks via a software-led system, you can be sure that your infrastructure is trusted and secure. You're watching A Blueprint for Trusted Infrastructure, made possible by Dell Technologies in collaboration with theCUBE, your leader in enterprise and emerging tech coverage. Jerome West is the product management security lead for HCI, hyper-converged infrastructure, at Dell Technologies. Jerome, welcome. >> Thank you, Dave. >> Hey Jerome, in this series, A Blueprint for Trusted Infrastructure, we've been digging into the different parts of the infrastructure stack, including storage, servers, and networking, and now we want to cover hyperconverged infrastructure. So my first question is, what's unique about HCI that presents specific security challenges? What do we need to know? >> So what's unique about hyper-converged infrastructure is the breadth of the security challenge. We can't simply focus on a single type of IT system, like a server or a storage system or a virtualization piece of software; I mean, HCI is all of those things. So luckily we have excellent partners like VMware and Microsoft, and internal partners like the Dell PowerEdge team, the Dell storage team, the Dell networking team, and on and on. These partnerships and these collaborations are what make us successful from a security standpoint.
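The secure-access-control capabilities listed earlier in this exchange, role-based access control, restricted SSH, multifactor authentication, all reduce to the same pattern: map an authenticated identity to a narrow set of permitted operations and log every attempt. A minimal sketch, with the role names, operations, and audit sink invented for illustration:

```python
import logging
from datetime import datetime, timezone

# Hypothetical role -> permitted-operations map (illustrative only).
ROLE_PERMISSIONS = {
    "security_admin": {"view_config", "edit_acl", "view_audit_log"},
    "network_admin":  {"view_config", "edit_interface", "edit_routing"},
    "storage_admin":  {"view_config", "edit_storage_vlan"},
}

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("config_audit")

def authorize_and_log(user: str, role: str, operation: str) -> bool:
    """Allow the operation only if the role permits it, and log every attempt."""
    allowed = operation in ROLE_PERMISSIONS.get(role, set())
    audit_log.info("%s user=%s role=%s op=%s allowed=%s",
                   datetime.now(timezone.utc).isoformat(), user, role, operation, allowed)
    return allowed

# Example: a network admin may edit routing but not ACLs.
assert authorize_and_log("alice", "network_admin", "edit_routing")
assert not authorize_and_log("alice", "network_admin", "edit_acl")
```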
So let me give you an example to illustrate. In the recent past we're seeing growing scope and sophistication in supply chain attacks. This means an attacker is going to attack your software supply chain upstream, so that, hopefully for them, a piece of malicious code that wasn't identified early in the software supply chain gets distributed by a large player, like a VMware or a Microsoft or a Dell. So to confront this kind of sophisticated, hard-to-defeat problem, we need short-term solutions and we need long-term solutions as well. >> So for the short-term solution, the obvious thing to do is to patch the vulnerability. The complexity is that, for our HCI portfolio, we build our software on VMware, so we would have to consume a patch that VMware would produce and provide it to our customers in a timely manner. Luckily, VxRail's engineering team has co-engineered a release process with VMware that significantly shortens our development lifecycle, so that when VMware produces a patch, within 14 days we will integrate our own code with the VMware release, we will have tested and validated the update, and we will give an update to our customers within 14 days of that VMware release. As a result of this kind of rapid development process, VxRail had over 40 releases of software updates last year. For a longer-term solution, we're partnering with VMware and others to develop a software bill of materials. We work with VMware to consume their software manifest, including their upstream vendors and their open source providers, to have a comprehensive list of software components. Then we aren't caught off guard by an unforeseen vulnerability, and we're more easily able to detect where the software problem lies, so that we can quickly address it. So these are the kinds of relationships and solutions that we can co-engineer through effective collaborations with our partners. >> Great, thank you for that description. So if I had to define what cybersecurity resilience means to HCI or converged infrastructure, my takeaway was: you've got to have a short-term, instant-patch solution, and then you've got to do an integration in a very short time, you know, two weeks, to have that integration done. And then, longer term, you have to have a software bill of materials so that you can ensure the provenance of all the components. Help us out: is that the right way to think about cybersecurity resilience? Do you have any additions to that definition? >> I do. I really think that is cybersecurity resilience for HCI, because, like I said, it has sort of unprecedented breadth across our portfolio. It's not a single thing, it's a bit of everything. So really the strength, or the secret sauce, is to combine all the solutions that our partners develop while integrating them with our own layer. So let me give you an example. So HCI is basically taking a software abstraction of hardware functionality and implementing it in something called the virtualization layer. It's basically virtualizing hardware functionality, like, say, a storage controller: you could implement it in hardware, but for HCI, for example in our VxRail portfolio, our VxRail product, we integrate it into a product called vSAN, which is provided by our partner VMware. So that portfolio strength really comes through our partnerships.
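The software bill of materials idea Jerome describes can be sketched as a simple cross-reference between a component inventory and a list of known-vulnerable versions. The SBOM entries and advisories below are made up for illustration and are not VMware's or Dell's actual manifests:

```python
# Illustrative SBOM: (component, version) pairs.
sbom = [
    ("openssl", "1.1.1k"),
    ("log4j-core", "2.14.1"),
    ("zlib", "1.2.13"),
]

# Illustrative advisories: component -> set of affected versions.
advisories = {
    "log4j-core": {"2.14.0", "2.14.1"},
    "openssl": {"1.0.2u"},
}

def affected_components(sbom, advisories):
    """Return the SBOM entries that match a known advisory."""
    return [(name, ver) for name, ver in sbom if ver in advisories.get(name, set())]

for name, ver in affected_components(sbom, advisories):
    print(f"needs patching: {name} {ver}")
```

Having the full manifest on hand is what lets a vendor answer "are we exposed?" quickly instead of being caught off guard.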
So what we do is integrate these security functionalities and features into our product. So our partnership extends to our ecosystem through VMware products like NSX, Horizon, Carbon Black, and vSphere. All of them integrate seamlessly with VMware, and we also leverage VMware's software partnerships on top of that. So for example, VxRail supports multifactor authentication through vSphere's integration with something called Active Directory Federation Services, ADFS. There are a lot of providers that support ADFS, including Microsoft Azure, so now we can support a wide array of identity providers, such as Auth0, or, as I mentioned, Azure or Active Directory, through that partnership. So we can leverage all of our partners' partnerships as well; there's sort of a second layer. So being able to secure all of that provides a lot of options and flexibility for our customers. So basically, to summarize my answer, we consume all of the security advantages of our partners, but we also expand on them to make a product that is comprehensively secured at multiple layers: from the hardware layer that's provided by Dell through PowerEdge, to the hyper-converged software that we build ourselves, to the virtualization layer that we get through our partnerships with Microsoft and VMware.
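The federated-identity integrations mentioned above (ADFS, Azure AD, Auth0) ultimately hand the platform a signed token whose signature, issuer, audience, and expiry must be checked before any access is granted. A minimal sketch using the third-party PyJWT library; the issuer, audience, and key are placeholders, and a real deployment would fetch signing keys from the provider's published JWKS endpoint:

```python
import jwt  # PyJWT
from jwt import InvalidTokenError

def validate_token(token: str, issuer_public_key: str) -> dict:
    """Accept the token only if signature, issuer, audience, and expiry all check out."""
    try:
        return jwt.decode(
            token,
            issuer_public_key,
            algorithms=["RS256"],
            audience="example-management-plane",   # placeholder audience
            issuer="https://idp.example.com/",     # placeholder identity provider
        )
    except InvalidTokenError as exc:
        raise PermissionError(f"token rejected: {exc}") from exc
```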
>> Great, I mean, that's super helpful. You've mentioned NSX, Horizon, Carbon Black, all the VMware components, Auth0, which the developers are going to love, you've got Azure identity, so it's really an ecosystem. So you may have actually answered my next question, but I'm going to ask it anyway, because you've got this software-defined environment and you're managing servers and networking and storage with this software-led approach. How do you ensure that the entire system is secure end to end? >> That's a really great question. So the answer is, we do testing and validation as part of the engineering process; it's not just bolted on at the end. So, for example, VxRail is the market's only co-engineered solution with VMware; other vendors sell VMware as a hyper-converged solution, but we actually include security as part of the co-engineering process with VMware. So it's considered when VMware builds their code, and their process dovetails with ours, because we have a secure development lifecycle, which other products might talk about in their discussions with you, that we integrate into our engineering lifecycle. So because we follow the same framework, all of the code should interoperate from a security standpoint. And so when we do our final validation testing, when we do a software release, we're already halfway there in ensuring that all these features will give the customers what we promised. >> That's great. All right, let's close. Pitch me: what would you say is the strong suit? Summarize the strengths of the Dell hyper-converged infrastructure and converged infrastructure portfolio, specifically from a security perspective, Jerome. >> So I talked about how hyper-converged infrastructure simplifies security management, because basically you're going to take all of these features that were abstracted in hardware, and they're now abstracted in the virtualization layer; now you can manage them from a single point of view, whether that would be, say, for VxRail, vCenter, for example. So by abstracting all this, you make it very easy to manage security, and highly flexible, because now you don't have limitations around a single vendor; you have a multiple array of choices and partnerships to select from. So I would say that is the key to HCI. Now, what makes Dell the market leader in HCI is that not only do we have that functionality, but we also make it exceptionally useful to you, because it's co-engineered, it's not bolted on. So I gave the example of the SBOM, and I gave the example of how we modified our software release process with VMware to make it very responsive. >> A couple of other features that we have, specific just to HCI, are digitally signed LCM updates. This is an example of a feature that's exclusive to Dell and not done through a partnership. So we digitally sign our software updates, so the user can be sure that the update they're installing into their system is an authentic and unmodified product; we give it a Dell signature that's validated prior to installation. So not only do we consume the features that others develop in a seamless and fully validated way, but we also bolt on our own specific HCI security features that work with all the other partnerships and give the user an exceptional security experience. So, for example, the benefit to the customer is you don't have to create a complicated security framework that's hard for your users to use and hard for your system administrators to manage; it all comes in a package. It can all be managed through vCenter, for example, and then the specific hyper-converged functions can be managed through VxRail Manager or through SDDC Manager. So there are very few panes of glass that the administrator or user ever has to worry about. It's all self-contained and manageable. >> That makes a lot of sense. So you've got your own infrastructure, you're applying your best practices to that, like the digital signatures, you've got your ecosystem, you're doing co-engineering with the ecosystems, delivering security in a package, minimizing the complexity at the infrastructure level. The reason, Jerome, this is so important is because SecOps teams, you know, they've got to deal with cloud security, they've got to deal with multiple clouds, now they have their shared responsibility model going across multiple clouds. They've got all this other stuff that they have to worry about: they've got to secure the containers and the runtime and the platform and so forth. So they're being asked to do other things. If they have to worry about all the things that you just mentioned, they'll never get there; the security is just going to get worse. So my takeaway is, you're removing that infrastructure piece and saying, okay guys, you can now focus on those other things that are not necessarily Dell's domain, but where you can work with other partners, and your own teams, to really nail that. Is that a fair summary? >> I think that is a fair summary, because absolutely the worst thing you can do from a security perspective is provide a feature that's so unusable that the administrator disables it, or other key security features. So when I work with my partners to define and develop a new security feature, the thing I keep foremost in mind is: will this be something our users want to use and our administrators want to administer? Because if it's not, if it's something that's too difficult or onerous or complex, then I try to find ways to make it more user-friendly and practical.
And this is a challenge sometimes, because our products operate in highly regulated environments, and sometimes they have to have certain rules and certain configurations that aren't the most user-friendly or management-friendly. So I put a lot of effort into thinking about how we can make a feature useful while still complying with all the regulations that we have to comply with. And by the way, we're very successful in highly regulated spaces; we sell a lot of VxRail, for example, into the Department of Defense and banks and other highly regulated environments, and we're very successful there. >> Excellent. Okay, Jerome, thanks. We're going to leave it there for now. I'd love to have you back to talk about the progress that you're making down the road; things always advance in the tech industry, and so we would appreciate that. >> I would look forward to it. Thank you very much, Dave. >> You're really welcome. In a moment I'll be back to summarize the program and offer some resources that can help you on your journey to secure your enterprise infrastructure. I want to thank our guests for their contributions in helping us understand how investments by a company like Dell can both reduce the need for DevSecOps teams to worry about some of the more fundamental security issues around infrastructure, and give greater confidence in the quality, provenance, and data protection designed into core infrastructure like servers, storage, networking, and hyper-converged systems. You know, at the end of the day, whether your workloads are in the cloud, on-prem, or at the edge, you are responsible for your own security. But vendor R&D and vendor process must play an important role in easing the burden faced by security, dev, and operations teams. And on behalf of theCUBE production, content, and social teams, as well as Dell Technologies, we want to thank you for watching A Blueprint for Trusted Infrastructure. Remember, part one of this series, as well as all the videos associated with this program, and of course today's program, are available on demand at thecube.net, with additional coverage at siliconangle.com. And you can go to dell.com/security solutions to learn more about Dell's approach to securing infrastructure, and there are tons of additional resources that can help you on your journey. This is Dave Vellante for theCUBE, your leader in enterprise and emerging tech coverage. We'll see you next time.

Published Date : Oct 4 2022



Multicloud Roadmap, the Gateway to Supercloud | Supercloud22


 

(soft music) >> Welcome back, everyone. This is Supercloud 22, live in the Palo Alto office, our stage performance; we're streaming virtually. It's our pilot event, our inaugural event, Supercloud 22. I'm John Furrier, with my co-host Dave Vellante. We've got a featured keynote conversation with Kit Colbert, who's the CTO of VMware, and he's going to lay it all out. Break it down, Kit, great to see you. Thanks for joining us for Supercloud 22, our inaugural event. >> Yeah, I'm excited to be here. Thanks for having me. >> So we have great distinguished panels coming up throughout the day. We heard from Victoria earlier in the keynote. There's a shift happening; the shift has happened, and it's called cloud. You just published a white paper that kind of brings out these new challenges around the complexity of how companies want to run their business. >> Yep. >> It's not born in the cloud, it's cloud everywhere; that seems to be the theme. What's your take on Supercloud? What's the roadmap for multicloud? >> Yeah, well, the reason that we got interested in this was just talking to our customers, and the reality is everybody is using multiple clouds today: multiple public clouds, they've got things on-prem, they've got stuff at the edge. And so their applications are essentially distributed everywhere. And the challenge they start running into there is that there's just a lot of heterogeneity: there are different APIs, different capabilities, inconsistencies, incompatibilities, in terms of workload placement, data migration, security, as we just heard about, et cetera. And so I think everyone's struggling with trying to figure out, how do I drive consistency across all that diversity, and what sort of consistency do I want? And one of the things that became really interesting in our conversations with customers is that there is no one-size-fits-all: different folks are in different places, and the types of consistency that they want to prioritize will be different based on their individual business requirements. And so this started forming a picture for us, saying, okay, what we need is a set of capabilities, of multi-cloud, cross-cloud services, that deliver that consistency across all the different environments where applications may be running. And that is what formed the early thinking and sort of the paper that we wrote on it, as well as some of the work, and I think it eventually leads to this vision of Supercloud, right? 'Cause I think you guys have the right idea, which is, hey, how does all this stuff come together, and what does that bigger picture look like? And so I think between the native services that are there individually for each cloud, which offer great value, by the way, and which people definitely should be taking advantage of, and then another set of services, which are multi-cloud, that go across clouds and provide that consistency, looking at that together, that's my picture of where Supercloud is. >> So the paper's called "The Era of Multi-Cloud Services Has Arrived," a VMware executive outlook for IT leaders and decision makers; I'm sure you can get it on your website. >> Yep. >> And in there you talked about, well, first of all, I think you would agree that multicloud has fundamentally been a symptom of multi-vendor, or of M&A; I mean, you talked about that in the paper, right? >> Yeah. >> It was never really a strategy. It was just like, hey, we woke up in the 2020s and here we are with multiple clouds, right?
Yeah, it was one of those situations where most folks that we talked to didn't plan to be multi-cloud; now that's changed a little bit in the past year or two. >> Sure. >> But certainly in the earlier days of cloud, people would go all in, saying, hey, I'm going to go all in on one of the major hyperscalers and go for it there. And that's great, and it offers a lot of advantages, right? There is internal consistency there, there's usually pretty good integration between their services, so on and so forth. The problem, though, that you start facing is that, to your point, acquisitions: you acquire a company using a different cloud, and okay, now I've got two different clouds. Or sometimes you have the phenomenon of shadow IT still happening, where some random line of business is going to go off and use a different cloud for whatever reason. The other thing that we've seen is that you may have standardized on one, but then over time technology changes, another cloud makes major advancements in the state of the art, let's say in machine learning, and you say, hey, I want to go to this other cloud for that. So what we start to see is that people now are choosing public clouds based on best-of-breed service capabilities, and they're going to make those decisions in a fairly fine-grained manner, right? Sometimes down to the team, the line of business, et cetera. And so this is where customers and companies find themselves. Now it's like, oh boy, now I have all these clouds. And what's happened is that they've kind of dealt with it in an ad hoc manner: they would spin up individual operations teams, security teams, et cetera, that specialized in each of the clouds and had knowledge about how to do that. But now people have found that, okay, I'm duplicating all of this, there's not really consistency in my approach here, is there a better way? And I think this is, again, the advent of a lot of the thinking around multi-cloud services and Supercloud. >> And I think one of the things, too, in listening to you talk, is that the old model used to be: solve complexity with more complexity. And customers don't want that, from what we're observing. And what you're saying is, they've seen the benefits of DevOps and DevSecOps, so they know the value, 'cause they've been on, say, one native cloud. Now they say, okay, I'm on-premises too, and we heard from Victoria that there's a lot of private cloud going on, but that essentially makes it another cloud by default as well. So hybrid is multicloud. >> Hybrid is a subset, yeah. Hybrid is, we kind of had this evolution of thinking, right? Where you kind of had all the sort of different locations, and then I think hybrid cloud was an attempt to say, okay, let's try to connect one location, or a set of locations, on premises with a public cloud and have some level of consistency there. But really, what we look at here with multicloud or Supercloud is a generalization of that: we're not talking about one or two locations on-prem and one cloud, we're talking about everything now. And moreover, I think hybrid cloud tended to focus a lot on core infrastructure and management. This looks across the board: we're talking about security, we're talking about application development, talking about end-user experience, things like zero trust. We're talking about infrastructure, data. So it goes much, much broader, I think, than when we talked about hybrid cloud a few years ago.
>> So in your paper, Kit, you've essentially laid out an early framework. >> Yep. >> Let's call it that, for what we call Supercloud, what you call cross-cloud services. So what do you see as the technical enablers that are the salient aspects of, again, multi-cloud or Supercloud? >> Yep. Well, for me it comes down to, okay, taking a step back: we have this problem, right? Where you have a lot of diversity across different clouds, and customers are looking for some levels of consistency, but as I said, rarely do I see two customers that want exactly the same types of consistency. And so what we're trying to do is step back and, first of all, establish a taxonomy, and by that I mean the different types of consistency that you might want. And so there are things around infrastructure consistency; security consistency, where software supply chain security is probably the top-of-mind one that I hear from customers; application and application services, things like databases, messaging and streaming services, AI/ML services, et cetera; end-user capabilities; and then of course data as well. And so in the paper we say, okay, here are these five areas of consistency, and that's the first piece. The second one then turns more to an architectural question of what exactly is a multi-cloud service: what does it mean for a cloud service to be multi-cloud, and what are the properties there? So essentially we said, okay, we see three different types of those. There's one where that service could run on a single cloud but support multiple clouds. So think about, for instance, a service that does cost analysis: it may be executing on AWS, let's say, but it could do cost analysis for Azure or Google or AWS or anybody, right? So that's the first type. The second type is a bit more advanced, where now you're saying, I can actually instantiate that same service into multiple clouds. And we see that oftentimes with things like databases that have a lot of performance, latency, et cetera, requirements, where you can't be accessing that database remotely from a different cloud, because that's going to be too slow; you have it on the same cloud that you're in. And so again, you see various vendors out there implementing that, where that database can be instantiated wherever you'd like. And then the third one goes even further, and this is where we really get into some of the much more difficult use cases, where customers want a workload to be on-prem, and sometimes, especially for those that are heavily regulated, they may need it even in an air-gapped or disconnected environment. So there, can you take that same service but now run it without your operators being able to manage it 24/7? So those are the three categories: a single-cloud instance supporting multiple clouds, a multi-cloud instance, and a multi-cloud instance that can run disconnected.
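One way to read the taxonomy Kit lays out is as a small data model: a multi-cloud service declares which areas of consistency it addresses and which of the three deployment patterns it supports. The encoding below is just one possible sketch of that framing, with names chosen for illustration rather than taken from the paper:

```python
from dataclasses import dataclass
from enum import Enum, auto

class ConsistencyArea(Enum):
    INFRASTRUCTURE = auto()
    SECURITY = auto()
    APPLICATION_SERVICES = auto()
    END_USER = auto()
    DATA = auto()

class DeploymentPattern(Enum):
    SINGLE_CLOUD_SUPPORTING_MANY = auto()  # runs in one cloud, manages several
    MULTI_CLOUD_INSTANCE = auto()          # instantiable natively in each cloud
    DISCONNECTED_INSTANCE = auto()         # also runs on-prem or air-gapped

@dataclass
class MultiCloudService:
    name: str
    areas: set[ConsistencyArea]
    pattern: DeploymentPattern

cost_analysis = MultiCloudService(
    name="cost-analysis",
    areas={ConsistencyArea.INFRASTRUCTURE},
    pattern=DeploymentPattern.SINGLE_CLOUD_SUPPORTING_MANY,
)
```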
>> So you're abstracting, you as the R&D arm, you're abstracting that complexity. How do you handle the problem where one cloud maybe has a better service than the other clouds? Do you have to devolve to the lowest common denominator? How do you mask that? >> Well, so that's a really good question, and we've debated it, and there's been a lot of thought on it. Our current point of view is that we really want to leave it up to the company themselves to make that decision, again, because we see different use cases. So for instance, I talk to customers in the defense sector, and they're like, hey, if a foreign adversary is attacking one of these public clouds that we're in, we've got to be able to evacuate our applications from there, sometimes in minutes, in order to maintain our operational capabilities. And so there, there does need to be at least a common-denominator approach, just because of that requirement. I see other folks, say the financial and banking industries, that are also regulated; I think for them it's oftentimes 90 days to get out of a cloud, so they can do a little bit of re-architecture, they've got time to roll up the sleeves and change some things, so maybe it's not quite as strict. Whereas other companies say, you know what, I want to take advantage of these best-of-breed services native to the clouds. So we don't try to prescribe a certain approach there, but we say you've got to align it with what your business requirements are. >> How about the API layer? So one of the things we've said is that we felt like a super PaaS was a requirement of the Supercloud, because it's a purpose-built PaaS that helps you with that objective, whatever that is. And you say in the paper that, for developers, each cloud provider has unique infrastructure interfaces and APIs that add work and slow the pace of their releases, and for operators, each additional cloud increases the complexity of their architecture, fragmenting security, performance optimization, and cost management. So are you building a super PaaS? What's your philosophy? Victoria said, we want to have our cake, we want to eat it too, and we want to lose weight. So how do you do that? >> Yeah, so first things first: what the paper is trying to present, in the end, is really an architectural point of view on how to approach this, right? And then, yeah, we at VMware have got a lot of solutions toward some of those things, but we also realize we can't do everything ourselves; the space is too large. So it's very much a partner strategy there. Now, that being said, on things like the PaaS side, we are doing a lot, for instance around Tanzu, which is our modern apps portfolio of products. And the focus there really is to, yes, provide some of that consistency across different clouds, enabling customers to take advantage of either cross-cloud PaaS-type services or native cloud services, I should say. And so we really give customers that choice, and I think that's for us where it's at, because again, we don't see it as one size fits all. >> So there's your cake, and eat it too. So you're saying the developer experience can be identical across clouds. >> Yep. >> Unless the developers don't want it to be. >> Yeah, and maybe the team makes that decision. Look, there are a lot of reasons why you may or may not want to make that choice. The reality is that these native cloud services do add a lot of value, and oftentimes they're very easy to consume, to get started with, to get going. And so there's a trade-off you've got to think about, and I don't think there's a right answer. >> So Kit, I've got to ask you. You said you can't do it alone. >> Yeah. >> VMware, I know for a fact, you guys have been working on this for many, many years. >> Yep. >> (indistinct) remember, I interviewed him in 2016 when he did the deal with AWS with Andy Jassy, and that really moved the needle; things got really great from there with VMware.
So would you be open to a consortium to oversee this? Because you guys have a lot of investment in it as a company, but I also don't hear you trying to do the lock-in thing. So would you guys be open to a consortium to kind of try to figure out what these building blocks look like? Or is it a bag of Legos, whatever people want? >> Absolutely, and what we offer in the paper is really just a starting point. It's pretty simple: we're trying to define a few basics of the taxonomy and some outline sketches, if you will, of what that architectural picture might look like. But it's very much just a starting point, and this is not something we can do alone. This is something that we really need the entire industry to rally around, because again, I think what's important here are standards. >> Yeah. >> There's got to be this sort of decomposition of functionality, a breakdown into the different logical layers of functionality: what do those APIs or interfaces look like, and how do we ensure interoperability? Because we do want people to be able to get best of breed, to be able to bring together different vendor solutions to enable that. >> And I was watching a silicon day event just last week, talking about advances in silicon. What's your position on that? Because you're seeing the (indistinct) players almost getting more niche, and the hardware matters more: silicon speed, latency, GPUs. So that seems to me to be an enabler opportunity for the ecosystem to innovate at the PaaS and SaaS layers. Where do you guys see it? Where are you guys strong, and where do you have work to do? If you had to say, there's some white space at VMware, like, hey, we own this area, we're solid here, and here are some white spaces that VMware could use some help with. >> Yeah, well, I think the infrastructure space you just mentioned is clearly one that we've been focused on for a long time. We're expanding into the modern app space, expanding into security; we've been strong in end user for a while. So a lot of the different multi-cloud capabilities we've actually been, to your point, developing for a while. And I think that's exactly, again, what went into this: what we started noticing was that all of our different product teams were reacting to the same thing, and we weren't necessarily talking about it together yet. >> Like what? >> Well, this whole challenge of multiple clouds, of dealing with that heterogeneity, of wanting choice and flexibility in where to place a workload or where to place a virtual desktop, or whatever it might be. And so each of the teams was responding individually to that customer feedback, and I think what we recognized was, hey, let's up-level this: what's the bigger picture, and what's the common architecture across all of it, right? So the really interesting aspect here is that this is very much driven by what we're hearing directly from customers. >> You kind of implied just recently that the paper was pretty straightforward, pretty basic, early days, but it's well thought out. And one of the things you talked about was the types of multi-cloud services. >> Yep. >> You had data, end-user services, security, infrastructure, which is your wheelhouse, and application services. >> Yep. >> And you sort of went into detail defining those. Where is management in all that? So these are the ones you're going after. What about management? What are your thoughts on that?
>> Yeah, so it's a really good question; we debated this for a long time. Does management actually get a separate sort of layer, a sixth one perhaps, or is it sort of baked into the different ones? And we kind of went with the latter, where it's sort of baked in: there's infrastructure management, there's modern app management, there's management for end users, there's management for each, security obviously. So we see a lot of different management planes, control planes, across each of those different layers. Now, does there need to be a separate one that has its own layer? Arguably yes; I mean, I think there are good arguments for that, and this is exactly why we put this out there: to get people to read it, to get people to give us feedback, and, going back to the consortium idea, to come together as a group of practitioners across the industry and really figure out an industry viewpoint on this. >> So what are the trade-offs there? What would be the benefit of having that separate layer? I presume it's simpler to do it the way you've done it, but what would be the benefit of having a separate one? >> Yeah, I think it was probably more about simplicity to start with; you could imagine 20 different layers, and maybe that's where it's going to go. But also, I think, it's how do you define the layer? And for us it was more around some of these functional aspects, infrastructure versus application level versus end user, and management is more of a commonality across those. But again, arguments could be made. >> Logical place to start. >> Yeah. >> The other thing you said in here: multi-cloud application services can route a request for a particular service, such as a database, and deploy the service on the correct individual cloud, using the most appropriate technology for the use case, et cetera, et cetera. >> Yep. >> That, to me, sounds like a metadata problem. And so can you talk about how you've approached that? You mentioned AWS RDS, great examples, Azure SQL, an Oracle database, et cetera, et cetera, and multiple endpoints. How do you approach that? >> Yeah, well, I think there are a bunch of different approaches there. And so again, the idea is that, and I know there's been reference to sort of the operating system for Supercloud, what does that look like, right? We don't actually use that term, but I do like the concept of an operating system, because a lot of the things you just talked about there are things operating systems do: you've got to have a scheduler. And so you look across many different clouds and you've got to figure out, okay, where do I actually want, in this case let's say a database instance, to go and be provisioned? And then really it's up to, I think, the vendor, or in this case the multi-cloud service creator, to define how they want to do that. They could leverage the native cloud services, or they could build their own technology, which a lot of the vendors are doing. And so the point, though, is that from an end-user standpoint, and it goes back to your complexity-versus-simplicity question, you get the simplicity of a single API whose implementation you don't really need to deal with, because you're saying, I'm getting a service, I need a database that has certain properties, and I want it here versus there versus wherever. But it's up to that multi-cloud service to figure out a lot of those implementation specifics.
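The placement behavior described here, ask for "a database with these properties, in this place" behind a single API and let the service pick the cloud and backing technology, can be sketched as a tiny scheduler. The catalog, backends, and region names are invented for illustration and are not any vendor's actual control plane:

```python
# Illustrative catalog: which clouds can satisfy a "database" request, and how.
CATALOG = {
    "database": [
        {"cloud": "aws",   "backend": "rds-postgres", "regions": {"us-east-1", "eu-west-1"}},
        {"cloud": "azure", "backend": "azure-sql",    "regions": {"eastus"}},
        {"cloud": "gcp",   "backend": "cloud-sql",    "regions": {"us-central1"}},
    ]
}

def place(service: str, required_region: str, allowed_clouds: set) -> dict:
    """Pick the first catalog entry that satisfies the region and cloud constraints."""
    for option in CATALOG.get(service, []):
        if option["cloud"] in allowed_clouds and required_region in option["regions"]:
            return option
    raise LookupError(f"no placement found for {service} in {required_region}")

# The caller only asks for "a database here"; the implementation detail stays hidden.
print(place("database", "eu-west-1", {"aws", "azure"}))
```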
>> So are you the Supercloud OS? >> I think it is VMware's goal to become the Supercloud OS, for sure. But like any good operating system, as we said, it's all about applications, right? So you have a platform point of view, but you've got to partner widely. >> And you've got to get the hardware relationship. >> Yes. >> The silicon chips. >> Yep. >> Right. >> Yeah, and actually that was a good point, I want to go back to that one, 'cause you mentioned it earlier: the innovation that we're seeing, things like Arm processors and Graviton, and a lot of these things happening. And so I think that's another really interesting area where you're seeing tremendous innovation in the public cloud. One of the challenges, though, for public cloud is that, at scale, it takes longer to release newer hardware. So in some cases, if you want bleeding-edge stuff, you can't go with public cloud, 'cause it's just not there yet, right? So that's again another interesting thing where you... >> Well, some will say that they launch 5,000 new services every year at AWS. >> They have some bleeding-edge stuff. >> Well, no, no, sorry, let me clarify. I'm not talking about the software, I'm talking about the hardware side. >> Okay, got it, okay. >> Like the silicon? >> Yeah, like the latest and greatest GPU, FPGA. >> Why can't they? >> 'Cause they do tens of thousands of them, hundreds of thousands of them. >> Oh, just because it's so many. >> It's the scale. Yeah, that's the point, right? >> Right. >> And it's fundamental to the model in terms of how big they are. And so that's why we do see some customers who have very specialized hardware requirements need to do it in the private cloud, right, on-prem or possibly a colo. >> Or edge. >> Or edge. >> Edge is a great example of... >> But we often see, again, people who want the latest bleeding-edge GPUs, whatever they are, even something a bit more experimental, go on-prem for that. >> Yeah. >> And look, I do not want to disparage the public cloud, please don't take that away. It's just an artifact when it gets to hardware; with software they can scale, and they do (indistinct). >> Well, it's the context of the OS conversation; the OS has to write to hardware and enable applications. >> Where I was getting caught up in that, Kit, is they're all developing their own silicon, and they're developing it, most of it Arm-based, on a much, much faster cycle. They can go from design to tape-out much faster than Intel historically has. And you're seeing it. >> Intel just posted a loss. >> Yeah, I think if you look at the overall system, you're absolutely right. >> Yeah, but it's the deployment, because of the scale: it's one availability zone and another, and another region, and that's... >> Well, yeah, but the counterpoint to what I just said would be, hey, they have very well-controlled environments, very well-controlled systems, so they don't need to support a million different configuration settings or whatever; they've got theirs that they use, right? So from a system standpoint and so forth, yeah, I agree that there's a lot they can do there. I was speaking specifically to different types of hardware accelerators being a bit of a (indistinct). >> If it's not in the 5,000 services that they offer, you can't get it, whereas on-prem you can say, I want that, here it is. >> I'm not saying that on-prem is necessarily fundamentally better in any way.
I'm just saying, for this particular area... >> It's use-case driven. >> It is use-case driven, and that's the whole point of all this, right? And I know a lot of people in their heads associate VMware with on-prem, but we are not dogmatic at all. And, as you guys know, but many people may not, we partner with all the public cloud hyperscalers. And so our point of view is much more nuanced, saying, look, we're happy to run workloads wherever you want to. In fact, that's what we hear from customers: they want to run them everywhere, but it's about finding the right tool for the right job. And that's really what this multi-cloud approach is. >> Yeah, and I think the structural change from the virtualization hypervisor era, this new shift to a V2, Supercloud, there's something happening fundamentally that's use-case driven; it's not about dogma or whatever. I mean, cloud's great, but native clouds have their pros and cons. >> And I would say that a prerequisite for Supercloud has got to be running in a public cloud, but I'd say it also has to be inclusive of on-prem data. >> Yes, absolutely. >> And you're not going to just move all that data off-prem, maybe in the fullness of time, but I don't personally believe that. But you look at what Goldman Sachs has done with AWS: they've got their on-prem data and they're connecting to the AWS cloud. >> Yep. >> What Walmart's doing with Azure. And that's going to happen in a lot of different industries. >> Yeah. >> Well, I think security will drive that too. We had that conversation, because no one wants to increase the surface area. Number one, they want complexity to be reduced, and they want economic benefits. That's the Supercloud kind of (indistinct). >> It's security, but it's also the differentiable advantage that you actually have on-prem that you don't necessarily... >> Right, well, we're going to debate this now. Kit, thank you for coming on and giving that keynote. We're going to have a panel to debate and discuss the blockers and the enablers to Supercloud, and there are some enablers and potentially blockers. >> Yep, absolutely. >> So we'll get into that. Okay, up next, the panel to discuss blockers and enablers of Supercloud, after this quick break. (soft music)

Published Date : Sep 9 2022



Securing the Supercloud | Supercloud22


 

>> Okay, welcome back everyone to Supercloud 22. This is theCUBE studios' live performance; we're streaming virtually at siliconangle.com and thecube.net. I'm John Furrier, host of theCUBE, with Dave Vellante, with a distinguished panel talking about securing the Supercloud, all CUBE alumni: Gee Rittenhouse, the CEO of Skyhigh Security; Piyush Sharma, founder of Accurics, sold to Tenable; and Tony Kueh, investor, co-founder, and former head of product at VMware. Thanks for coming on to our inaugural Supercloud pilot event. >> Good to see you guys. >> Big topic. >> Okay. So before we get into securing the cloud, one of the things that we were discussing before we came on camera was how cloud, the relationship between cloud and on-premise and multi-cloud, and how Supercloud fits into that. At the end of the day, security's driving a lot of the conversations on the ops side, and the dev shift left is happening; we see that out there. So before we get into it, how do you guys see Supercloud? Tony, we'll start with you, we'll go down the line. What is Supercloud to you? >> Well, to me, Supercloud is really the next evolution, the culmination of all these services coming together, right? As an application developer today, you really don't need to worry about where this thing is sitting or what the latency is, because the internet is fast enough. Now I really want to know what services something provides and how I get access to it. Now security, we'll talk about that later; that becomes a big issue because of the fragmentation of how security is implemented across all the different vendors. So to me it's an IP address, I program to it, and off we go, but there's a lot of... >> You like that, but it's the iceberg chart, right? Like I'm the developer touching the APIs up there; there's a bunch of other things underneath, the underlying services. >> Okay. Looking forward again. Gee, what's your take? Obviously we've had many conversations on theCUBE. What's your Supercloud update? >> Yeah, so I view it as just an extension of what we see today. Before, maybe 10 years ago, we were mashing up applications built on other SaaS applications and whatnot. Now we're just extending that down to further primitives; we don't really care where our mashup resides, what cloud platform, where it sits, to Tony's point, as long as you have an IP address. But beyond that, we're just going to start to get little microservices deeper into the applications. >> Piyush, what's your take? >> I think Supercloud to me is something that doesn't physically exist; it exists only on my laptop. That's what Supercloud means to me. I know it takes a lot behind the scenes to get that working and running, but essentially, everything I used to be able to touch physically versus not being able to touch anything, that is Supercloud to me. >> So to what Victoria was saying, yeah, we see serverless out there, all these cool things happening. >> Exactly. >> And you look at some of the successful companies that have come in, I call it V2 cloud, some are saying the next gen; they're all building on top of the CapEx. I mean, why would you not want to leverage all that work AWS is doing? And now Azure, and obviously Google's out there, and you've got other clouds out there. But in terms of AWS as a hyperscaler, they're spending all the money and they're getting better, they're getting lower level; we talked about some of that yesterday. Databricks, Snowflake, Goldman Sachs, there are industry clouds that could be powerhouse service providers to themselves and their vertical. Then you've got specialty clouds: there could be a data cloud, there could be an identity cloud. So yeah, how does this sort itself out? How do you guys see that? Can they coexist?
We're talking about some of that yesterday, data bricks, snowflake, Goldman Sachs there's industry clouds that could be powerhouse service providers to themselves and their vertical. Then you got specialty clouds. Like there could be a data cloud, there could be an identity cloud. So yeah. How does this sort itself out? How do you guys see that? Because can they coexist? >>But I think they have to right, because I, I think, you know, eventually organizations will get big enough where they can be strong and really market leading in multiple segments. But if you think about what it takes to really build a massive scaled out database company that, that DNA doesn't just overnight translate to identity or translate to video, it takes years to build that up. So in the meantime, all these guys have to understand that they are one part of the service stack to power the next gen solutions. And if they don't play well with each other, then you're gonna have a problem. >>So security, I think is one of the hardest problems of, of super cloud. And not only do you have too many tools and a lack of talent, but you've now got this new first line of defense, which is the cloud. And the problem is you've got multiple clouds. So you've got multiple first lines of defense with multiple cloud provider tools. And then the CISO, I guess, is the next line of defense with the application development team. You know, there to be the pivot point between strategy and execution. And I guess audit is the third line of the defense. So it's an even more complicated environment. So gee, how do you see that CSO role changing and, and can there actually be a unified security layer in Supercloud? >>Yeah, so I believe that that they can be, the role is definitely changing because now a CSO actually has to have a basic understanding of how clouds work, the dependency of clouds on the, on the business that they serve. And, and this is to your point, not only do we have these new lines and opening up in a tax surface, but they're coupled together. So we have supply chain type connections between this. So there's a coherence across these systems that a CISO has to kind of think about not only these Bo cloud boundaries, but the trust boundaries between them. So classic example visibility, wh what, where are these things and what are the dependencies in my business then of course you mentioned compliance. Am I regulatory? And then of course protecting and responding to this, >>You know? Yeah. The, the, the supply chain piece that you just mentioned. I mean, I feel like there's like these milestones stocks, net was a milestone, you know, obvious obviously log four J was another one, the supply chain hack with solar winds. Yep. You know, it's just, the adversary just keeps getting stronger and stronger and, and, and more agile. So, so is this a data? Do we solve this as a data problem? Is it, you know, you can't just throw more infrastructure at it. What are your thoughts >>For it? I think, you know, great, great point that you're brought up. We need to look at things very fundamentally. What is happening is security has the most difficult job in the cloud, especially super cloud. The poor guys are managing some, managing something or securing something that they can't govern, right? Your, your custodian of the cloud as your developers and DevOps, they are the ones who are defining, creating, destroying things in the cloud. 
And that guy sitting at the end of the tunnel, looking at things that what he gets and he has to immediately respond. That's why it has to be fundamentally solve. Number one, we talked about supply chain. We talked about the, the, the stuck net to wanna cry, to sort of wins, to know the most recent one on the pipeline. Once the interesting phenomena is that the way industry has moved super cloud, the attackers are also moving them super attackers, right? They have stopped. They have not stopped, but they have started slowly moving to the left, which is the governance part. So they have started attacking your source code, you know, impersonating the codes, replacing the binary, finding one is there. So if they can, if the cloud is built so early, why can't I go early and, and, and inject myself. >>So super hackers is coming to super thinking Hollywood right now. I mean, that brings up a good point. I mean, this whole trust thing is huge. I mean, I hear zero trust. I think, wait a minute, that's not the conference I was just at, we went to, we managed, we work with DockerCon and they were talking about trust services. Yeah. So supply chain source code has trust brokering going on. And yet you got zero trust, which is which are they contextually different? I mean, what, what, >>What, from my perspective, though, the same in that zero trust is a framework that starts with minimum privileges and then build up those privileges over time. Normally in today's dialogue, zero trust is around access. I'm not having a broad access. I'm having a narrow access around an application, but you can also extend those principles to usage. What can, how much privilege do I have within an application? I have to build up my trust to enhance and, and get extended privileges within an application. Of course you can then extend this naturally to applications, APIs, applications, talking with each other. And so by you, you have to restrict the attack surface that is based on a trust model fundamentally. And then to your point, I mean, there's always this residual that you have to deal with afterwards. >>So, so super cloud implies more surface area. You're talking about private. So here we go. So how, and by the way, the AWS was supposed to be at this conference. They said they couldn't make it. They had a schedule issue, but they wanted to be here, but I would ask them, how do you differentiate AWS going forward? Do you go IAS all the way? Do you release the pass layer up? How does this solve? Because you have native clouds that are doing great, the complexity on super cloud, and multi-cloud has to be solved. >>Let me offer maybe a different argument. So if you think about we're all old enough to see the history sort of re pendulum shift and it shifting back in a way, if you're arguing that this culmination of all these services in the form of cloud today, essentially moving up stack, then really this is a architectural pattern that's emerging, right? And therefore there needs to be a super cloud, almost operating system. So operating systems, if you build one before you need a scheduler, you need process handler, you need process isolation, you need memory storage, compute all that together. Now that is our sitting in different parts of the internet. And, and there is no operating system. Yes. And that's the gap, right? And so if you don't even have an operating system, how do you implement security? And that's the pain. Yeah, because today it's one off, directly from service to service. 
Like how many times can you set up SAML orchestration? You can have an entire team doing that, right. If that's, that's what you have to do. So I think that's ultimately the gap and, and we're sort of just revolving around this concept that there's missing an operating system for superpower. >>It's like Maribel Lopez said in the previous panel that Lord of the rings, there will be no one ring rule the ball. Right. Probably there is needs one. Oh yeah. But, but, but, so what happens? So again, security's the hardest problem. So Snowflake's gotta implement its security, you know, data bricks with an open source model has to implement its security. So there's these multiple security models. You talk about zero trust, which I, if, if I infer what you said, gee, it's essentially, if you don't have privilege access, you don't get access. Yeah. Right. If you, okay. Okay. So that's the framework. Fine. And then you gotta earn it over time. Yeah. Now companies like Amazon, they have the, the talent and the skills to implement that zero trust framework. Exactly. So, so the, the industry, you, you guys with the R and D have to actually ultimately build that, that super cloud framework, don't you? >>Yeah. But I would just look all of the major cloud providers, the ones you mentioned and more will have their own framework within their own environment. Right? Yeah. The problem is with super cloud, you're extending it across multiple ones. There's no standards. There's no easy way to integrate that. So now all of that is left to the developer who is like throwing out code as fast as they can >>Is their, their job is to abstract that, I mean, they've gotta secure the, the run time, they gotta secure the container. >>You have to >>Abstract it. Right. Okay. But, but they're not security pros or ops. >>Exactly. They're haves. >>But to, but to G's point, right. If everyone's implementing their own little Z TNA, then inherently, there's a blind trust between two vendors. Right. That has to >>Be, >>That has to be >>Established. That's implicit. You're saying, >>Yeah. But, but it's, it's contractual, it's not technology. Right. Because I'm turning something out in my cloud, you're turning out something in your cloud that says we've got something, some token exchange, which gives us trust. But what happens if that breaks down and whatever happens to the third party comes in? I think that's the problem. >>Yeah. In fact, in fact, the, if I put the, you know, combine one of those commons, the zero trust was build, keeping identity authentication, then authorization in mind, right? Yeah. This needs to be extended because the zero test definition now probably go into integrity. Yeah, exactly. Right. Yeah. I authenticated. I worked well with Tony in the past, but how do I know that something has changed on the Tony's side? Yeah, exactly. Right, right. That, that integrity is going to be very, very foundational. Given developers are building those third party libraries, those source code pumping stuff. The only way I can validate is, Hey, what has changed? >>And then throw edge into the equation, John and IOT and machine to machine. Exactly. It's just, >>Well, >>Yeah. I think, I think we have another example to build on Tony's operating system model. Okay. And that is the cloud access service broker model for SAS. So we, we have these services sitting out there, we've brokered them together. They're normally on user policies. 
What I can have access to what I can do, what I can't do, but that can be extended down to services and have the same kind of broker arrangement all through APIs. You have to establish that trust and the, and the policies there, and they can be dynamic and all of this stuff. But you can from an, either an operating system or a SAS interaction and integration model come to these same kind of points. So who >>Builds the, the, the secure Supercloud? Is it new guys like you? Is it your old company giants like Palo Alto? Who, who actually builds the and secures the Supercloud it sounds like it's an ecosystem. >>Yeah. It is an ecosystem. Absolutely. It's an ecosystem. >>Yeah. There's no one security Supercloud >>As well. No, but I, I do think there's one, there's one difference in that historically security has always focused on that shiny object. The, the, the, a particular solution to a particular threat when you're dealing with a, a cloud or super cloud, like the number of that is incalculable. So you have to come into some sort of platform. And so you will see if it's not one, you know, a finite number of platform type solutions that are trying to solve this on behalf of the >>Customer. That to your point, then get connected. >>I think it's gonna be like Unix, right? Like how many flavors of Unix were there out there? All of them 'em had a scheduler. All of them had these processes. All of them had their little compilers. You can compile to that system, target to that system. And for a while, it's gonna be very fragmented until multiple parties decide to converge. >>Right? Well, this is, this is the final question we have one minute left. I wish we had more time. This is a great panel. We'll we'll bring you guys back for sure. After the event, what one thing needs to happen to unify or get through the other side of this fragmentation than the challenges for Supercloud. Because remember the enterprise equation is solve complexity with more complexity. Well, that's not what the market wants. They want simplicity. They want SA they want ease of use. They want infrastructure risk code. What has to happen? What do you think each of you? >>So I, I can start and extending to the previous conversation. I think we need a consortium. We need, we need a framework that defines that if you really want to operate in super cloud, these are the 10 things that you must follow. It doesn't matter whether you take AWS slash or GCP, or you have all, and you will have the on-prem also, which means that it has to follow a pattern. And that pattern is what is required for super cloud. In my opinion, otherwise security is going everywhere. They're like they have to fix everything, find everything and so on. So forth, it's not gonna be possible. So they need a, they need a framework. They need a consortium. And it, this consortium needs to be, I think, needs to led by the cloud providers, because they're the ones who have these foundational infrastructure elements and the security vendor should contribute on providing more severe detections or findings. So that's, in my opinion is, should be the model. >>Well, thank you G >>Yeah, I would think it's more along the lines of a business model we've seen in cloud that the scale matters. And once you're big, you get bigger. We haven't seen that coals around either a vendor, a business model, whatnot, to bring all of this and connect it all together yet. So that value proposition in the industry I think is missing, but there's elements of it already available. 
>>I think there needs to be a mindset. If you look again, history repeating itself: the internet came together around a set of IETF RFC standards that everybody embraced and extended, but still, there was at least a baseline. And I think at that time the largest and most innovative vendors understood that they couldn't do it by themselves. So I think what we need is a mindset where these big guys, take Google as an example, accept that they're not gonna win it all, but they can have a substantial share. So how do they collaborate with the ecosystem around a set of standards so that they can bring their differentiation and then embrace everybody >>Together. Guys, this has been fantastic. I would just chime in that back in the day those were proprietary NOSes, proprietary network protocols; you had kind of an enemy to rally around. I'm not sure I see an enemy out here right now, the clouds are doing great, right? So it's a tough one, but I think super OSes, super consortiums, and super business models are gonna emerge. Thanks so much for spending the time, great conversation. Thank you for having us. >>Keep it right here for more Supercloud coverage from Palo Alto, live and streamed virtually. I'm John with Dave. Thanks for watching. Stay with us for more coverage after this break.
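As an illustration of the zero trust framing discussed in this panel, start from minimum privileges and build trust up over time, applying the same checks to services and APIs as to users, here is a minimal, hypothetical policy-evaluation sketch. The entity names, trust scores, and action thresholds are invented for illustration and are not from any panelist's product.

```python
# Minimal sketch of a zero-trust style check: every request starts with no standing
# privilege, and access is granted only if the caller's accumulated trust meets the
# threshold for the specific action. All names and numbers are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Caller:
    identity: str
    trust: float = 0.0                      # earned over time (device posture, history, integrity checks)
    granted: set = field(default_factory=set)

# Each action requires its own minimum trust; nothing is implied by a prior grant.
ACTION_THRESHOLDS = {
    "read:telemetry": 0.2,
    "invoke:billing-api": 0.6,
    "write:source-repo": 0.9,
}

def authorize(caller: Caller, action: str) -> bool:
    required = ACTION_THRESHOLDS.get(action)
    if required is None:
        return False                        # default deny for unknown actions
    allowed = caller.trust >= required
    if allowed:
        caller.granted.add(action)          # narrow, per-action grant, not broad access
    return allowed

svc = Caller("payments-service", trust=0.65)
print(authorize(svc, "invoke:billing-api"))   # True
print(authorize(svc, "write:source-repo"))    # False: higher privilege must be earned, not inherited
```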

Published Date : Aug 9 2022


Luis Ceze, OctoML | Amazon re:MARS 2022


 

(upbeat music) >> Welcome back, everyone, to theCUBE's coverage here live on the floor at AWS re:MARS 2022. I'm John Furrier, host for theCUBE. Great event, machine learning, automation, robotics, space, that's MARS. It's part of the re-series of events, re:Invent's the big event at the end of the year, re:Inforce, security, re:MARS, really intersection of the future of space, industrial, automation, which is very heavily DevOps machine learning, of course, machine learning, which is AI. We have Luis Ceze here, who's the CEO co-founder of OctoML. Welcome to theCUBE. >> Thank you very much for having me in the show, John. >> So we've been following you guys. You guys are a growing startup funded by Madrona Venture Capital, one of your backers. You guys are here at the show. This is a, I would say small show relative what it's going to be, but a lot of robotics, a lot of space, a lot of industrial kind of edge, but machine learning is the centerpiece of this trend. You guys are in the middle of it. Tell us your story. >> Absolutely, yeah. So our mission is to make machine learning sustainable and accessible to everyone. So I say sustainable because it means we're going to make it faster and more efficient. You know, use less human effort, and accessible to everyone, accessible to as many developers as possible, and also accessible in any device. So, we started from an open source project that began at University of Washington, where I'm a professor there. And several of the co-founders were PhD students there. We started with this open source project called Apache TVM that had actually contributions and collaborations from Amazon and a bunch of other big tech companies. And that allows you to get a machine learning model and run on any hardware, like run on CPUs, GPUs, various GPUs, accelerators, and so on. It was the kernel of our company and the project's been around for about six years or so. Company is about three years old. And we grew from Apache TVM into a whole platform that essentially supports any model on any hardware cloud and edge. >> So is the thesis that, when it first started, that you want to be agnostic on platform? >> Agnostic on hardware, that's right. >> Hardware, hardware. >> Yeah. >> What was it like back then? What kind of hardware were you talking about back then? Cause a lot's changed, certainly on the silicon side. >> Luis: Absolutely, yeah. >> So take me through the journey, 'cause I could see the progression. I'm connecting the dots here. >> So once upon a time, yeah, no... (both chuckling) >> I walked in the snow with my bare feet. >> You have to be careful because if you wake up the professor in me, then you're going to be here for two hours, you know. >> Fast forward. >> The average version here is that, clearly machine learning has shown to actually solve real interesting, high value problems. And where machine learning runs in the end, it becomes code that runs on different hardware, right? And when we started Apache TVM, which stands for tensor virtual machine, at that time it was just beginning to start using GPUs for machine learning, we already saw that, with a bunch of machine learning models popping up and CPUs and GPU's starting to be used for machine learning, it was clear that it come opportunity to run on everywhere. >> And GPU's were coming fast. >> GPUs were coming and huge diversity of CPUs, of GPU's and accelerators now, and the ecosystem and the system software that maps models to hardware is still very fragmented today. 
So hardware vendors have their own specific stacks. So Nvidia has its own software stack, and so does Intel, AMD. And honestly, I mean, I hope I'm not being, you know, too controversial here to say that it kind of of looks like the mainframe era. We had tight coupling between hardware and software. You know, if you bought IBM hardware, you had to buy IBM OS and IBM database, IBM applications, it all tightly coupled. And if you want to use IBM software, you had to buy IBM hardware. So that's kind of like what machine learning systems look like today. If you buy a certain big name GPU, you've got to use their software. Even if you use their software, which is pretty good, you have to buy their GPUs, right? So, but you know, we wanted to help peel away the model and the software infrastructure from the hardware to give people choice, ability to run the models where it best suit them. Right? So that includes picking the best instance in the cloud, that's going to give you the right, you know, cost properties, performance properties, or might want to run it on the edge. You might run it on an accelerator. >> What year was that roughly, when you were going this? >> We started that project in 2015, 2016 >> Yeah. So that was pre-conventional wisdom. I think TensorFlow wasn't even around yet. >> Luis: No, it wasn't. >> It was, I'm thinking like 2017 or so. >> Luis: Right. So that was the beginning of, okay, this is opportunity. AWS, I don't think they had released some of the nitro stuff that the Hamilton was working on. So, they were already kind of going that way. It's kind of like converging. >> Luis: Yeah. >> The space was happening, exploding. >> Right. And the way that was dealt with, and to this day, you know, to a large extent as well is by backing machine learning models with a bunch of hardware specific libraries. And we were some of the first ones to say, like, know what, let's take a compilation approach, take a model and compile it to very efficient code for that specific hardware. And what underpins all of that is using machine learning for machine learning code optimization. Right? But it was way back when. We can talk about where we are today. >> No, let's fast forward. >> That's the beginning of the open source project. >> But that was a fundamental belief, worldview there. I mean, you have a world real view that was logical when you compare to the mainframe, but not obvious to the machine learning community. Okay, good call, check. Now let's fast forward, okay. Evolution, we'll go through the speed of the years. More chips are coming, you got GPUs, and seeing what's going on in AWS. Wow! Now it's booming. Now I got unlimited processors, I got silicon on chips, I got, everywhere >> Yeah. And what's interesting is that the ecosystem got even more complex, in fact. Because now you have, there's a cross product between machine learning models, frameworks like TensorFlow, PyTorch, Keras, and like that and so on, and then hardware targets. So how do you navigate that? What we want here, our vision is to say, folks should focus, people should focus on making the machine learning models do what they want to do that solves a value, like solves a problem of high value to them. Right? So another deployment should be completely automatic. Today, it's very, very manual to a large extent. 
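To make the compilation approach Luis describes concrete, here is a minimal sketch using the open source Apache TVM project mentioned in the interview: load a trained model and compile it for a chosen hardware target. It follows the Relay Python API shown in TVM's public tutorials rather than OctoML's platform; the model file name, input name, and shapes are hypothetical placeholders.

```python
# Sketch: compile one trained model for a specific hardware target with Apache TVM.
# Assumes TVM's Relay frontend/build API; model file and input shape are placeholders.
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

onnx_model = onnx.load("model.onnx")                 # hypothetical trained model
shape_dict = {"input": (1, 3, 224, 224)}             # hypothetical input name and shape
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Retarget the same model by changing this string: "llvm" for CPU, "cuda" for NVIDIA GPUs, etc.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Run the compiled module on the chosen device.
dev = tvm.device(target, 0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
out = module.get_output(0).numpy()
```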
So once you're serious about deploying machine learning model, you got a good understanding where you're going to deploy it, how you're going to deploy it, and then, you know, pick out the right libraries and compilers, and we automated the whole thing in our platform. This is why you see the tagline, the booth is right there, like bringing DevOps agility for machine learning, because our mission is to make that fully transparent. >> Well, I think that, first of all, I use that line here, cause I'm looking at it here on live on camera. People can't see, but it's like, I use it on a couple couple of my interviews because the word agility is very interesting because that's kind of the test on any kind of approach these days. Agility could be, and I talked to the robotics guys, just having their product be more agile. I talked to Pepsi here just before you came on, they had this large scale data environment because they built an architecture, but that fostered agility. So again, this is an architectural concept, it's a systems' view of agility being the output, and removing dependencies, which I think what you guys were trying to do. >> Only part of what we do. Right? So agility means a bunch of things. First, you know-- >> Yeah explain. >> Today it takes a couple months to get a model from, when the model's ready, to production, why not turn that in two hours. Agile, literally, physically agile, in terms of walk off time. Right? And then the other thing is give you flexibility to choose where your model should run. So, in our deployment, between the demo and the platform expansion that we announced yesterday, you know, we give the ability of getting your model and, you know, get it compiled, get it optimized for any instance in the cloud and automatically move it around. Today, that's not the case. You have to pick one instance and that's what you do. And then you might auto scale with that one instance. So we give the agility of actually running and scaling the model the way you want, and the way it gives you the right SLAs. >> Yeah, I think Swami was mentioning that, not specifically that use case for you, but that use case generally, that scale being moving things around, making them faster, not having to do that integration work. >> Scale, and run the models where they need to run. Like some day you want to have a large scale deployment in the cloud. You're going to have models in the edge for various reasons because speed of light is limited. We cannot make lights faster. So, you know, got to have some, that's a physics there you cannot change. There's privacy reasons. You want to keep data locally, not send it around to run the model locally. So anyways, and giving the flexibility. >> Let me jump in real quick. I want to ask this specific question because you made me think of something. So we're just having a data mesh conversation. And one of the comments that's come out of a few of these data as code conversations is data's the product now. So if you can move data to the edge, which everyone's talking about, you know, why move data if you don't have to, but I can move a machine learning algorithm to the edge. Cause it's costly to move data. I can move computer, everyone knows that. But now I can move machine learning to anywhere else and not worry about integrating on the fly. So the model is the code. >> It is the product. >> Yeah. And since you said, the model is the code, okay, now we're talking even more here. So machine learning models today are not treated as code, by the way. 
So do not have any of the typical properties of code that you can, whenever you write a piece of code, you run a code, you don't know, you don't even think what is a CPU, we don't think where it runs, what kind of CPU it runs, what kind of instance it runs. But with machine learning model, you do. So what we are doing and created this fully transparent automated way of allowing you to treat your machine learning models if you were a regular function that you call and then a function could run anywhere. >> Yeah. >> Right. >> That's why-- >> That's better. >> Bringing DevOps agility-- >> That's better. >> Yeah. And you can use existing-- >> That's better, because I can run it on the Artemis too, in space. >> You could, yeah. >> If they have the hardware. (both laugh) >> And that allows you to run your existing, continue to use your existing DevOps infrastructure and your existing people. >> So I have to ask you, cause since you're a professor, this is like a masterclass on theCube. Thank you for coming on. Professor. (Luis laughing) I'm a hardware guy. I'm building hardware for Boston Dynamics, Spot, the dog, that's the diversity in hardware, it's tends to be purpose driven. I got a spaceship, I'm going to have hardware on there. >> Luis: Right. >> It's generally viewed in the community here, that everyone I talk to and other communities, open source is going to drive all software. That's a check. But the scale and integration is super important. And they're also recognizing that hardware is really about the software. And they even said on stage, here. Hardware is not about the hardware, it's about the software. So if you believe that to be true, then your model checks all the boxes. Are people getting this? >> I think they're starting to. Here is why, right. A lot of companies that were hardware first, that thought about software too late, aren't making it. Right? There's a large number of hardware companies, AI chip companies that aren't making it. Probably some of them that won't make it, unfortunately just because they started thinking about software too late. I'm so glad to see a lot of the early, I hope I'm not just doing our own horn here, but Apache TVM, the infrastructure that we built to map models to different hardware, it's very flexible. So we see a lot of emerging chip companies like SiMa.ai's been doing fantastic work, and they use Apache TVM to map algorithms to their hardware. And there's a bunch of others that are also using Apache TVM. That's because you have, you know, an opening infrastructure that keeps it up to date with all the machine learning frameworks and models and allows you to extend to the chips that you want. So these companies pay attention that early, gives them a much higher fighting chance, I'd say. >> Well, first of all, not only are you backable by the VCs cause you have pedigree, you're a professor, you're smart, and you get good recruiting-- >> Luis: I don't know about the smart part. >> And you get good recruiting for PhDs out of University of Washington, which is not too shabby computer science department. But they want to make money. The VCs want to make money. >> Right. >> So you have to make money. So what's the pitch? What's the business model? >> Yeah. Absolutely. >> Share us what you're thinking there. >> Yeah. The value of using our solution is shorter time to value for your model from months to hours. Second, you shrink operator, op-packs, because you don't need a specialized expensive team. 
Talk about expensive, expensive engineers who can understand machine learning hardware and software engineering to deploy models. You don't need those teams if you use this automated solution, right? Then you reduce that. And also, in the process of actually getting a model and getting specialized to the hardware, making hardware aware, we're talking about a very significant performance improvement that leads to lower cost of deployment in the cloud. We're talking about very significant reduction in costs in cloud deployment. And also enabling new applications on the edge that weren't possible before. It creates, you know, latent value opportunities. Right? So, that's the high level value pitch. But how do we make money? Well, we charge for access to the platform. Right? >> Usage. Consumption. >> Yeah, and value based. Yeah, so it's consumption and value based. So depends on the scale of the deployment. If you're going to deploy machine learning model at a larger scale, chances are that it produces a lot of value. So then we'll capture some of that value in our pricing scale. >> So, you have direct sales force then to work those deals. >> Exactly. >> Got it. How many customers do you have? Just curious. >> So we started, the SaaS platform just launched now. So we started onboarding customers. We've been building this for a while. We have a bunch of, you know, partners that we can talk about openly, like, you know, revenue generating partners, that's fair to say. We work closely with Qualcomm to enable Snapdragon on TVM and hence our platform. We're close with AMD as well, enabling AMD hardware on the platform. We've been working closely with two hyperscaler cloud providers that-- >> I wonder who they are. >> I don't know who they are, right. >> Both start with the letter A. >> And they're both here, right. What is that? >> They both start with the letter A. >> Oh, that's right. >> I won't give it away. (laughing) >> Don't give it away. >> One has three, one has four. (both laugh) >> I'm guessing, by the way. >> Then we have customers in the, actually, early customers have been using the platform from the beginning in the consumer electronics space, in Japan, you know, self driving car technology, as well. As well as some AI first companies that actually, whose core value, the core business come from AI models. >> So, serious, serious customers. They got deep tech chops. They're integrating, they see this as a strategic part of their architecture. >> That's what I call AI native, exactly. But now there's, we have several enterprise customers in line now, we've been talking to. Of course, because now we launched the platform, now we started onboarding and exploring how we're going to serve it to these customers. But it's pretty clear that our technology can solve a lot of other pain points right now. And we're going to work with them as early customers to go and refine them. >> So, do you sell to the little guys, like us? Will we be customers if we wanted to be? >> You could, absolutely, yeah. >> What we have to do, have machine learning folks on staff? >> So, here's what you're going to have to do. Since you can see the booth, others can't. No, but they can certainly, you can try our demo. >> OctoML. >> And you should look at the transparent AI app that's compiled and optimized with our flow, and deployed and built with our flow. That allows you to get your image and do style transfer. You know, you can get you and a pineapple and see how you look like with a pineapple texture. 
>> We got a lot of transcript and video data. >> Right. Yeah. Right, exactly. So, you can use that. Then there's a very clear-- >> But I could use it. You're not blocking me from using it. Everyone's, it's pretty much democratized. >> You can try the demo, and then you can request access to the platform. >> But you get a lot of more serious deeper customers. But you can serve anybody, what you're saying. >> Luis: We can serve anybody, yeah. >> All right, so what's the vision going forward? Let me ask this. When did people start getting the epiphany of removing the machine learning from the hardware? Was it recently, a couple years ago? >> Well, on the research side, we helped start that trend a while ago. I don't need to repeat that. But I think the vision that's important here, I want the audience here to take away is that, there's a lot of progress being made in creating machine learning models. So, there's fantastic tools to deal with training data, and creating the models, and so on. And now there's a bunch of models that can solve real problems there. The question is, how do you very easily integrate that into your intelligent applications? Madrona Venture Group has been very vocal and investing heavily in intelligent applications both and user applications as well as enablers. So we say an enable of that because it's so easy to use our flow to get a model integrated into your application. Now, any regular software developer can integrate that. And that's just the beginning, right? Because, you know, now we have CI/CD integration to keep your models up to date, to continue to integrate, and then there's more downstream support for other features that you normally have in regular software development. >> I've been thinking about this for a long, long, time. And I think this whole code, no one thinks about code. Like, I write code, I'm deploying it. I think this idea of machine learning as code independent of other dependencies is really amazing. It's so obvious now that you say it. What's the choices now? Let's just say that, I buy it, I love it, I'm using it. Now what do I got to do if I want to deploy it? Do I have to pick processors? Are there verified platforms that you support? Is there a short list? Is there every piece of hardware? >> We actually can help you. I hope we're not saying we can do everything in the world here, but we can help you with that. So, here's how. When you have them all in the platform you can actually see how this model runs on any instance of any cloud, by the way. So we support all the three major cloud providers. And then you can make decisions. For example, if you care about latency, your model has to run on, at most 50 milliseconds, because you're going to have interactivity. And then, after that, you don't care if it's faster. All you care is that, is it going to run cheap enough. So we can help you navigate. And also going to make it automatic. >> It's like tire kicking in the dealer showroom. >> Right. >> You can test everything out, you can see the simulation. Are they simulations, or are they real tests? >> Oh, no, we run all in real hardware. So, we have, as I said, we support any instances of any of the major clouds. We actually run on the cloud. But we also support a select number of edge devices today, like ARMs and Nvidia Jetsons. 
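The latency versus cost navigation Luis just described, benchmark on real instances, keep those under the latency SLA, then pick the cheapest, reduces to a small selection step. A minimal sketch; the instance names, latencies, and prices are invented placeholders, not OctoML benchmark data.

```python
# Pick the cheapest instance that satisfies a latency SLA.
# All figures are hypothetical; real numbers would come from benchmarking on real hardware.
benchmarks = [
    {"instance": "cloud-a.cpu.large",  "p95_latency_ms": 72.0, "usd_per_hour": 0.34},
    {"instance": "cloud-a.gpu.small",  "p95_latency_ms": 18.0, "usd_per_hour": 0.90},
    {"instance": "cloud-b.cpu.xlarge", "p95_latency_ms": 41.0, "usd_per_hour": 0.55},
]

def cheapest_within_sla(results, sla_ms):
    eligible = [r for r in results if r["p95_latency_ms"] <= sla_ms]
    if not eligible:
        raise ValueError(f"no instance meets the {sla_ms} ms SLA")
    return min(eligible, key=lambda r: r["usd_per_hour"])

print(cheapest_within_sla(benchmarks, sla_ms=50))   # cloud-b.cpu.xlarge at $0.55/hr
```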
And we have the OctoML cloud, which is a bunch of racks with a bunch Raspberry Pis and Nvidia Jetsons, and very soon, a bunch of mobile phones there too that can actually run the real hardware, and validate it, and test it out, so you can see that your model runs performant and economically enough in the cloud. And it can run on the edge devices-- >> You're a machine learning as a service. Would that be an accurate? >> That's part of it, because we're not doing the machine learning model itself. You come with a model and we make it deployable and make it ready to deploy. So, here's why it's important. Let me try. There's a large number of really interesting companies that do API models, as in API as a service. You have an NLP model, you have computer vision models, where you call an API and then point in the cloud. You send an image and you got a description, for example. But it is using a third party. Now, if you want to have your model on your infrastructure but having the same convenience as an API you can use our service. So, today, chances are that, if you have a model that you know that you want to do, there might not be an API for it, we actually automatically create the API for you. >> Okay, so that's why I get the DevOps agility for machine learning is a better description. Cause it's not, you're not providing the service. You're providing the service of deploying it like DevOps infrastructure as code. You're now ML as code. >> It's your model, your API, your infrastructure, but all of the convenience of having it ready to go, fully automatic, hands off. >> Cause I think what's interesting about this is that it brings the craftsmanship back to machine learning. Cause it's a craft. I mean, let's face it. >> Yeah. I want human brains, which are very precious resources, to focus on building those models, that is going to solve business problems. I don't want these very smart human brains figuring out how to scrub this into actually getting run the right way. This should be automatic. That's why we use machine learning, for machine learning to solve that. >> Here's an idea for you. We should write a book called, The Lean Machine Learning. Cause the lean startup was all about DevOps. >> Luis: We call machine leaning. No, that's not it going to work. (laughs) >> Remember when iteration was the big mantra. Oh, yeah, iterate. You know, that was from DevOps. >> Yeah, that's right. >> This code allowed for standing up stuff fast, double down, we all know the history, what it turned out. That was a good value for developers. >> I could really agree. If you don't mind me building on that point. You know, something we see as OctoML, but we also see at Madrona as well. Seeing that there's a trend towards best in breed for each one of the stages of getting a model deployed. From the data aspect of creating the data, and then to the model creation aspect, to the model deployment, and even model monitoring. Right? We develop integrations with all the major pieces of the ecosystem, such that you can integrate, say with model monitoring to go and monitor how a model is doing. Just like you monitor how code is doing in deployment in the cloud. >> It's evolution. I think it's a great step. And again, I love the analogy to the mainstream. I lived during those days. I remember the monolithic propriety, and then, you know, OSI model kind of blew it. But that OSI stack never went full stack, and it only stopped at TCP/IP. So, I think the same thing's going on here. 
You see some scalability around it to try to uncouple it, free it. >> Absolutely. And sustainability and accessibility to make it run faster and make it run on any deice that you want by any developer. So, that's the tagline. >> Luis Ceze, thanks for coming on. Professor. >> Thank you. >> I didn't know you were a professor. That's great to have you on. It was a masterclass in DevOps agility for machine learning. Thanks for coming on. Appreciate it. >> Thank you very much. Thank you. >> Congratulations, again. All right. OctoML here on theCube. Really important. Uncoupling the machine learning from the hardware specifically. That's only going to make space faster and safer, and more reliable. And that's where the whole theme of re:MARS is. Let's see how they fit in. I'm John for theCube. Thanks for watching. More coverage after this short break. >> Luis: Thank you. (gentle music)
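Following up on the "model as code" and auto-generated API thread in this conversation: once a model is compiled and deployed behind an endpoint, calling it looks like calling any other function or service. A minimal sketch; the endpoint URL, payload schema, and style-transfer example are hypothetical placeholders, not a real OctoML API.

```python
# Treating a deployed model like an ordinary function: a thin wrapper around an
# HTTP inference endpoint. The URL and payload fields are hypothetical.
import requests

INFERENCE_URL = "https://models.example.com/v1/style-transfer"  # placeholder

def stylize(image_bytes: bytes, style: str = "pineapple") -> bytes:
    resp = requests.post(
        INFERENCE_URL,
        files={"image": image_bytes},
        data={"style": style},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.content

# with open("photo.jpg", "rb") as f:
#     styled = stylize(f.read())
```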

Published Date : Jun 24 2022


Breaking Analysis: How Snowflake Plans to Make Data Cloud a De Facto Standard


 

>>From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >>When Frank Slootman took ServiceNow public, many people undervalued the company, positioning it as just a better help desk tool. It turns out the firm actually had a massive TAM expansion opportunity in ITSM, customer service, HR, logistics, security, marketing, and service management generally. The stock price followed over the years on the stellar execution under Slootman and CFO Mike Scarpelli's leadership. Now, when they took the reins at Snowflake, expectations were already set that they'd repeat the feat, but this time, if anything, the company was overvalued out of the gate. The thing is, people didn't really understand the market opportunity this time around, other than that it was a bet on Slootman's track record of execution and on data, pretty good bets. But folks really didn't appreciate that Snowflake wasn't just a better data warehouse, that it was building what they call a data cloud, and what we've termed a data supercloud. >>Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis we'll do four things. First, we're gonna review the recent narrative and concerns about Snowflake and its value. Second, we're gonna share survey data from ETR that will confirm precisely what the company's CFO has been telling anyone who will listen. Third, we're gonna share our view of what Snowflake is building, i.e. trying to become the de facto standard data platform. And fourth, convey our expectations for the upcoming Snowflake Summit next week at Caesars Palace in Las Vegas. Snowflake's most recent quarterly results have been well covered and well documented. It basically hit its targets, which for Snowflake investors was bad news. Wall Street piled on, expressing concerns about Snowflake's consumption pricing model, slowing growth rates, lack of profitability, and valuation given the current macro market conditions. The stock dropped below its IPO offering price, which you couldn't touch on day one, by the way, as the stock opened, and certainly closed, well above that price of 120. And folks expressed concerns about some pretty massive insider selling throughout 2021 and early 2022. All this caused the stock price to drop quite substantially. >>And today it's down around 63% or more year to date. But the only real substantive change in the company's business is that some of its largest consumer-facing customers, while still growing, dialed back their consumption this past quarter. The tone of the earnings call, I wouldn't say it was contentious, but Scarpelli, I think, was getting somewhat annoyed with the implication from some analyst questions that something is fundamentally wrong with Snowflake's business. So let's unpack this a bit. First, I wanna talk about consumption pricing on the earnings call. One of the analysts asked if Snowflake would consider more of a subscription-based model so that it could better weather such fluctuations in demand. Before the analyst could even finish the question, CFO Scarpelli emphatically interrupted and said no. (laughs) The analyst might as well have asked, hey Mike, have you ever considered changing your pricing model and screwing your customers the same way most legacy SaaS companies lock their customers in, >>so you could squeeze more revenue out of them and make my forecasting life a little bit easier?
(laughs) Consumption pricing is one of the things that makes a company like Snowflake so attractive, because customers, especially large customers facing fluctuating demand, can dial down usage for certain workloads that are maybe not yet revenue producing or critical. Now let's jump to insider trading. There was a lot of insider selling going on last year and into 2022, and I mean a lot: Slootman, Scarpelli, Christian Kleinerman, Mike Speiser, several board members. They sold stock worth many, many hundreds of millions of dollars or more at prices in the two hundreds and three hundreds and even four hundreds. You remember the company at one point was valued at a hundred billion dollars, surpassing the value of ServiceNow, which is kind of absurd at this point in the company's tenure, and the insiders' cost basis was very often in the single digits. >>So on the one hand, I can't blame them. You know what a gift the market gave them last year. Now, also, famed investor Peter Lynch famously said insiders sell for many reasons, but they only buy for one. But I have to say there wasn't a lot of insider buying of the stock when it was in the three hundreds and above, and so, yeah, this pattern is something to watch: are insiders buying? Now, I'm not sure we'll see much of that; Snowflake is pretty generous with stock-based compensation and insiders still own plenty of stock. So, you know, maybe not, but we'll see in future disclosures. But the bottom line is Snowflake's business hasn't dramatically changed, with the exception of these large consumer-facing companies. Now, another analyst pointed out that companies like Snap, Peloton, Netflix, and Facebook have been cutting back. >>And Scarpelli said, in what was a bit of a surprise to me, well, I'm not gonna name the customers, but it's not the ones you mentioned. So, you know, if I were the analyst I would've followed up with: how about Walmart, Target, Visa, Amex, Expedia, Priceline, or Uber? Any of those, Mike? I doubt he would've answered me anything. Anyway, the one thing that Scarpelli did do is update Snowflake's fiscal year 2029 outlook to emphasize the long-term opportunity that the company sees. This chart shows a financial snapshot of Snowflake's current business, using a combination of quarterly and full-year numbers, and a model of what the business will look like in 2029 according to Scarpelli and Dave Vellante, with a little bit of judgment. So this is essentially based on the company's framework. Snowflake this year will surpass 2 billion in revenues and is targeting 10 billion by 2029. >>Its current growth rate is 84% and its target is 30% in the out years, which is pretty impressive. Gross margins are gonna tick up a bit, but remember, Snowflake's cost of goods sold is dominated by its cloud costs, so it's got a governor there: it has to pay AWS, Azure, and Google for its infrastructure. But high seventies is a good target. It's not like the historical Microsoft 80 to 90% gross margin, not that Microsoft is there anymore, but Snowflake is gonna be limited in how much it can push gross margin because of that factor. It's got a tiny operating margin today and it's targeting 20% in 2029, so that would be 2 billion. And you would certainly expect its operating leverage in the out years to enable much, much lower SG&A than the current 54%.
I'm guessing R&D is gonna stay healthy, coming in at 15% or so. >>But the really interesting number to watch is free cash flow: 16% this year for the full fiscal year, growing to 25% by 2029. So, 2.5 billion in free cash flow in the out years, which I believe is up from Scarpelli's previous forecast in that 10 billion out-year 2029 view. And expect the net revenue retention, the NRR, to moderate. It's gonna come down, but it's still gonna be well over a hundred percent; we pegged it at 130% based on some of Mike's guidance. Now, today Snowflake and every other stock is well off. The company had a 40 billion value this morning and would drop well below that midday, but let's stick with 40 billion on this sad Friday for the stock market. Who knows what the stock is gonna be valued at in 2029? No idea, but let's say between 40 and 200 billion, and look, it could get even uglier in the market as interest rates rise. >>And if inflation stays high, you know, until we get a Paul Volcker-like action from the Fed chair, which is gonna be painful, let's hope we don't have a repeat of the long, drawn-out 1970s stagflation, but that is a concern among investors. We're gonna try to keep it positive here, and we'll do a little sensitivity analysis of Snowflake based on Scarpelli's and Vellante's 2029 projections. What we've done in this chart is take today's current valuation at about 40 billion and run a CAGR through 2029 with our estimates of valuation at that time. So if it stays at a 40 billion valuation, can you imagine Snowflake growing into a 10 billion revenue company with no increase in valuation by fiscal 2029? That would be a major bummer, and investors would get a 0% return. At 50 billion, a 4% CAGR; 60 billion, 7% CAGR. >>Now, a 7% market return is historically not bad relative to, say, the S&P 500, but with that kind of revenue and profitability growth projected by Snowflake, combined with inflation, that would again be kind of a buzzkill for investors. The picture at a 75 billion valuation isn't much brighter, but it picks up at a hundred billion; even with inflation, that should outperform the market. And as you get to 200 billion, which by the way would track revenue growth, you get a 30% plus return, which would be pretty good. Could Snowflake beat these projections? Absolutely. Could the market perform at the optimistic end of the spectrum? Sure, it could outperform these levels. Could it not perform at these levels? You bet. But hopefully this gives a little context and framework to what Scarpelli was talking about. Notwithstanding the market's unpredictability, you're on your own there.
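The arithmetic behind that sensitivity table is just a compound annual growth rate off today's roughly 40 billion dollar valuation. A quick sketch; the holding period is an assumption of roughly six and a half years to fiscal 2029, which is why the computed figures land near, not exactly on, the percentages quoted above.

```python
# CAGR implied by growing from today's ~$40B valuation to a hypothetical fiscal-2029 valuation.
def cagr(start_value: float, end_value: float, years: float) -> float:
    return (end_value / start_value) ** (1.0 / years) - 1.0

START_B = 40.0
YEARS = 6.5   # assumption: mid-2022 to fiscal year 2029

for end_b in (40, 50, 60, 75, 100, 200):
    print(f"${end_b}B in FY2029 -> {cagr(START_B, end_b, YEARS):.1%} CAGR")
# Roughly 0%, ~3-4%, ~6-7%, ~10%, ~15%, and ~28-30% respectively.
```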
That's the pinkish at 5% and churning that's the red only 1% or, or moving off the platform, tiny, tiny churn, subtract the red from the greens and you get a net score that, that, that nets out to 68%. >>That's an, a very impressive net score by ETR standards. But it's down from the highs of the seventies and mid eighties, where high seventies and mid eighties, where snowflake has been since January of 2019 note that this survey of 1500 or so organizations includes 155 snowflake customers. What was really interesting is when we cut the data by industry sector, two of Snowflake's most important verticals, our finance and healthcare, both of those sectors are holding a net score in the ETR survey at its historic range. 83%. Hasn't really moved off that, you know, 80% plus number really encouraging, but retail consumer showed a dramatic decline. This past survey from 73% in the previous quarter down to 54%, 54% in just three months time. So this data aligns almost perfectly with what CFO Scarelli has been telling the street. So I give a lot of credibility to that narrative. >>Now here's a time series chart for the net score and the provision in the data set, meaning how penetrated snowflake is in the survey. Again, net score measures, spending velocity and a specific platform and provision measures the presence in the data set. You can see the steep downward trend in net score this past quarter. Now for context note, the red dotted line on the vertical axis at 40%, that's a bit of a magic number. Anything above that is best in class in our view, snowflake still a well, well above that line, but the April survey as we reported on May 7th in quite a bit of detail shows a meaningful break in the snowflake trend as shown by ETRS call out on the bottom line. You can see a steady rise in the survey, which is a proxy for Snowflake's overall market penetration. So steadily moving up and up. >>Here's a bit of a different view on that data bringing in some of Snowflake's peers and other data platforms. This XY graph shows net score on the vertical axis and provision on the horizontal with the red dotted line. At 40%, you can see from the ETR callouts again, that snowflake while declining in net score still holds the highest net score in the survey. So of course the highest data platforms while the spending velocity on AWS and Microsoft, uh, data platforms, outperforms that have, uh, sorry, while they're spending velocity on snowflake outperforms, that of AWS and, and Microsoft data platforms, those two are still well above the 40% line with a stronger market presence in the category. That's impressive because of their size. And you can see Google cloud and Mongo DB right around the 40% line. Now we reported on Mongo last week and discussed the commentary on consumption models. >>And we referenced Ray Lenchos what we thought was, was quite thoughtful research, uh, that rewarded Mongo DB for its forecasting transparency and, and accuracy and, and less likelihood of facing consumption headwinds. And, and I'll reiterate what I said last week, that snowflake, while seeing demand fluctuations this past quarter from those large customers is, is not like a data lake where you're just gonna shove data in and figure it out later, no schema on, right. Just throw it into the pond. That's gonna be more discretionary and you can turn that stuff off. More likely. Now you, you bring data into the snowflake data cloud with the intent of driving insights, which leads to actions, which leads to value creation. 
And as Snowflake adds capabilities and expands its platform features and innovations and its ecosystem, more and more data products are gonna be developed in the Snowflake data cloud. >>And by data products, we mean products and services that are conceived by business users and that can be directly monetized, not just via analytics, but through governed data sharing and direct monetization. Here's a picture of that opportunity as we see it. This is our spin on our Snowflake total available market chart that we've published many, many times. The key point here goes back to our opening statements: the Snowflake data cloud is evolving well beyond just being a simpler, easier-to-use, and more elastic cloud database. Snowflake is building what we often refer to as a supercloud, that is, an abstraction layer that comprises rich features and leverages the underlying primitives and APIs of the cloud providers, but hides all that complexity and adds new value beyond that infrastructure. That value is seen in the left example in terms of compressed cycle time; Snowflake often uses the example of pharmaceutical companies compressing time to discover a drug by years. >>Great example, and there are many others. And then through organic development and ecosystem expansion, Snowflake will accelerate feature delivery. Snowflake's data cloud vision is not about vertically integrating all the functionality into its platform. Rather, it's about creating a platform and delivering secure, governed, facile, and powerful analytics and data sharing capabilities to its customers and partners in a broad ecosystem so they can create additional value on top of that. The ecosystem is how Snowflake fills the gaps in its platform. By building the best cloud data platform in the world in terms of collaboration, security, governance, developer friendliness, machine intelligence, etcetera, Snowflake believes it can and plans to create a de facto standard, in our view, in data platforms: get your data into the data cloud and all these native capabilities will be available to you. Now, is that a walled garden? Some might say it is. It's an interesting question and (laughs) it's a moving target. >>It's definitely proprietary in the sense that Snowflake is building something that is highly differentiated and is building a moat around it. But the more open Snowflake can make its platform, the more open source it uses, the more developer friendly it is, the greater the likelihood people will gravitate toward Snowflake. Now, my new friend Zhamak Dehghani, she's the creator of the data mesh concept; she might bristle at this narrative in favor of a more open-source version of what Snowflake is trying to build, but practically speaking, I think she'd recognize that we're a long ways off from that. And I also think there are benefits to a platform that, despite requiring data to be inside of the data cloud, can distribute data globally, enable facile, governed, and computational data sharing, and to a large degree be a self-service platform for data product builders. So this is how we see the Snowflake data cloud vision evolving. The question is, is edge part of that vision, on the right-hand side? >>Well, again, we think that is going to be a future challenge where the ecosystem is gonna have to come into play to fill those gaps. If Snowflake can tap the edge, it'll bring even more clarity as to how it can expand into what we believe is a massive 200 billion dollar TAM. Okay, let's close on next week's Snowflake Summit in Las Vegas. 
The Cube is very excited to be there. I'll be hosting with Lisa Martin, and we'll have Frank Slootman as well as Christian Kleinerman and several other Snowflake experts. Analysts are gonna be there, customers, and we're gonna have a number of ecosystem partners on as well. Here's what we'll be looking for, at least some of the things: evidence that our view of Snowflake's data cloud is actually taking shape and evolving in the way that we showed on the previous chart. We also wanna figure out where Snowflake is with its Streamlit acquisition. Remember, Streamlit is a data science play and an expansion into Databricks territory; Databricks and Snowflake have been going at it for a while. Streamlit brings an open-source Python library and a machine learning, kind of developer-friendly data science environment. We also expect to hear some discussion, hopefully a lot of discussion, about developers. Snowflake has a dedicated developer conference in November, so we expect to hear more about that and how it's gonna be further leveraging Snowpark, which it has previously announced, including a public preview of programming for unstructured data, and data monetization along the lines of what we suggested earlier, that is, building data products that have the bells and whistles of native Snowflake and can be directly monetized by Snowflake's customers. Snowflake already announced a new workload this past week in security, and we'll be watching for others. >>And finally, what's happening in the all-important ecosystem. One of the things we noted when we covered ServiceNow, and we use ServiceNow as an example because Frank Slootman and Mike Scarpelli and others from that DNA were there and they're improving on it, is that ServiceNow in its post-IPO, early adult years had a very slow pace of ecosystem development; in our view that was often one of our criticisms of ServiceNow. They had some niche SIs, like Cloud Sherpas, and eventually the big guys came in and began to really lean in, and you had some other innovators kind of circling the mothership, some smaller companies. But generally we see Slootman emphasizing ecosystem growth much, much more than at his previous company. And that is a fundamental requirement, in our view, of any cloud or modern cloud company. Now, to paraphrase the crazy man Steve Ballmer, developers, developers, developers, 'cause he screamed it and ranted and ran around the stage and was sweating (laughs): ecosystem, ecosystem, ecosystem equals optionality for developers, and that's what they want. >>And that's how we see the current and future state of Snowflake. Thanks today. If you're in Vegas next week, please stop by and say hello to theCUBE. Thanks to my colleagues: Stephanie Chan, who sometimes helps research Breaking Analysis topics; Alex Myerson, who is on production; and today Andrew Frick, Sarah Hiney, Steven Conti, Anderson Hill, Chuck, and the entire team in Palo Alto, including Christian. Sorry, didn't mean to forget you, Christian Writer. Of course, Kristin Martin and Cheryl Knight, they help get the word out. And Rob Hof is our EIC over at SiliconANGLE. Remember, all these episodes are available as podcasts; wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com. You can email me directly anytime at david.vellante@siliconangle.com; if you've got something interesting, I'll respond. 
If not, I won't. Or DM me @dvellante, or comment on my LinkedIn posts, and please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next week, I hope. If not, we'll see you next time on Breaking Analysis.

Published Date : Jun 10 2022


Francis Chow, Red Hat | Red Hat Summit 2022


 

>> We're back at the Seaport in Boston. Dave Vellante and Paul Gillin, you're watching theCUBE's coverage of Red Hat Summit 2022. A little different this year, a smaller venue, maybe a thousand people. Love the keynotes, compressed. Big virtual audience. So we're happy to be coming to you live, face to face. It's been a while since we've had these; for a lot of folks, this is their first in-person event. You know, it's kind of weird getting used to that, but I think in the next few months it's going to become the new, sort of quasi abnormal. Francis Chow is here. He's the Vice President and GM of In-Vehicle OS and Edge at Red Hat. Francis, welcome. That's the most interesting title we've had all week. So thanks for coming here. >> Thank you, Dave. Thank you, Paul, for having me here. >> So The Edge, I mean The Edge is, we heard about the International Space Station. We heard about ski boots, of course In-Vehicle. What's the Edge to you? >> Well, to me Edge actually could mean many different things, right? The way we look at Edge is, there is the traditional enterprise Edge, where there are second-tier, third-tier data centers, an extension from your core network and your centralized data center right out to remote locations. And then there is the Telco Edge, right, where we know about the 5G network, where you deploy base stations, which have a different set of size requirements than traditional enterprise edge networks. And then there is the Operational Edge, where we see the line of business operating in those locations, right? Things like manufacturing, oil rigs, retail stores. So a very wide variety of Edge that is doing OT types of technology. And then last but not least there is the consumer or kind of device edge, where we are now putting things into things like cars, as you said, like ski boots, and have that interaction with the end consumers. >> Is this why, I mean, there's a lot of excitement at Red Hat, I could tell among the Red Hat people, about this GM deal? Is this why that's so exciting to them? This really encompasses sort of all of those variants of the edge in automotive, in the automobile experience, doesn't it? >> I think why this is exciting to the industry and also to us is that if you look at how automotive is traditionally designed, the way they architect a vehicle today, it has many subsystems. They are all purpose-built, very tightly coupled with hardware and software, and it's very difficult to reuse, right? So their cost of development is high. The time to develop is long, and adding to that, there is a lengthy safety certification process, which also kind of makes it hard, because every time you make a change in the system you have to re-certify it again. >> Right. >> And typically it takes about six to 12 months to do so, every time you make a change. So a very lengthy process, which is important because we want to ensure occupants are safe in a vehicle. Now what we bring to the table, which I think is super exciting, is we bring this platform approach. Now you can use a consistent platform that is open, and you can actually now run multiple domain applications on the same platform, which means automakers can reuse components across model years and brands. That will lower the development cost. Now I think one of the key things that we bring to the table is that we introduce a new safety certification approach called Continuous Safety Certification. 
We actually announced that at our Summit last year with the intent, "Hey, we're going to deliver this functional-safety certified Linux platform," which is the first for Linux. And the way we do it is we work with our partner exida to try to define that approach. And at the high level, the idea really is to automate that certification process, just like how we automate software development. Right, we are adding monitoring capabilities, with functional safety related artifacts, in our CI/CD pipeline, and we aim to cut that kind of certification time back to a fraction of what is needed today. So what we can do, I think, with this collaboration with GM is help them get faster time to market and lower development costs. Now, adding to that, if you think about a modern Linux platform, you can update it over the air, right? This is a capability that we are working on with GM as well. Now what customers can expect for future vehicles is that there will be updates on apps and services, just like your cell phone, which makes your car more capable over time and more relevant for the long term. >> So there are some assumptions you're making at the edge. First of all, you described a spectrum. A retail store, which, you know, to me, okay, it's Edge, but you can take an x86 box or a hyperconverged infrastructure and throw it in there, and there are some opportunities to do some stuff in real time, but it's kind of a natural extension of IT. Whereas in vehicle, you've got to make some assumptions: spotty connectivity to do software downloads, and you can't do truck rolls at the far edge, right? None of that is okay. And so there are some assumptions there, and as you say, your role is to compress the time to market, but also deliver a better consumer >> Absolutely. >> Experience. So what can we expect? You started to talk about the future of in-vehicle, you know, or EVs if you will. What should we expect as consumers? You're saying over-the-air software; we're seeing that with some of the EV makers, for sure. But what's the future look like? >> I think what consumers can expect is really, over a period of time, a similar experience to what you have with your mobile device, right? If you look back 15, 20 years, right? You buy a phone, right? That's the feature set that you have with your phone, right? No updates, it is what it is, right, for the lifetime of the product, which is pretty much what you have now if you buy a vehicle, right. You have those features and capabilities and you live with them for the lifetime of the vehicle. >> Sometimes you have to drive in for maintenance, a service visit, to get a software update. >> We can talk about that too, right. But as we make the systems updateable, you can now expect more frequent and seamless updates of both the operating system and the application services that sit on top of that. Right, so I think in the future consumers can expect more capable vehicles after you purchase one, because new development in software can now be delivered with an update over the air. >> I assume this relationship with GM is not exclusive. Are you talking with other automakers as well? >> We are talking to other automakers. What we are working on with GM is really a product that could work for the industry, right? This is actually what we both believe is the right thing to do, right, as we are able to standardize how we approach the infrastructure. 
I think this is a good thing for the whole industry, to help accelerate innovation for the entire industry. >> Which is sort of the natural next question: are we heading toward an open automotive platform, like we have an open banking platform in that industry? Do you see the possibility that there could be a single platform that all or most of the automakers will work on? >> I wouldn't use the word single, but I definitely would use the word open. Right? Our goal is to build this open platform, right, because we believe in open source, right. We believe in community, right. If we make it open, we have more contributors to come in and help make the system better and, in a way, faster. And actually, like you said, right, improve the quality, right, so that the chance of a recall is now lower with this approach. >> You're using validated patterns as part of this initiative. Is that right? And what is a validated pattern? How is it different from a reference architecture? Is it just kind of a new name for reference architecture, or what value does it bring to the relationship? >> For automotive, right, we don't have a validated pattern yet, but I can broadly kind of speak about what that is. >> Yeah. >> And how we see that evolve over time. So a validated pattern basically is a combination of Red Hat products, multiple Red Hat products, and partner products. And we usually build it for a specific use case, and then we put those components together and run rigorous tests to validate that it's going to work, so that it becomes more repeatable and deployable for those particular edge use cases. Now we do work with our partners to make it happen, right, because in the end, right, we want to make a solution that is about 80% of the way there and allow our partners to kind of add more value and their secret sauce on top and deploy it. Right, and I'll give you kind of one example. You just had the interview with the Veterans Affairs team, right. One of our patterns, right, the medical diagnosis pattern, actually we worked with them in the early development stage of that. Right, what it does is help make assessments on pneumonia with chest X-rays, right. So it's a fully automated data pipeline: we get the chest X-ray from an object store, use AI/ML to diagnose whether there's pneumonia, and then put that in a dashboard, automated with the validated pattern. >> So you're not using them today, but can we expect that in the future? It sounds like... >> Yes, absolutely, it's in the works, yes. >> It would be a perfect vertical. >> How do you believe your work with GM, I mean, has implications across Red Hat? It seems like there are things you're going to be doing with GM that could affect other parts of your own product portfolio. >> Oh, absolutely. I think this actually is, it's a pivotal moment for Red Hat and the automotive industry, and I think broadly speaking for any safety-conscious industry, right. As we create this proof point, right, that we can build a Linux system that is optimized for footprint, performance, real-time capabilities, and be able to certify it for safety, I think for all the adjacent industries, right, you think about transportation, healthcare, right, industries that have tight safety requirements, it just opens up the aperture for us to address those markets in the future. >> So we've talked a lot about the consumerization of IT over the last decade. 
Many of us feel as though what's going on at the Edge, the innovations that are going on at the Edge, real-time AI inferencing, you know, streaming data, the innovations that ARM and others are performing, certainly NVIDIA, Intel, we heard today this notion of, you know, no touch, zero touch provisioning, that a lot of these innovations are actually going to find their way into the enterprise. Kind of a follow-on thought to what you were just talking about. And there are probably some future disruptions coming. You can almost guarantee that; I mean, every 15 years or so we get that kind of disruption. How are you thinking about that? >> Well, I think you're right. Some of the Edge innovation, right, you're going to kind of bring back to the enterprise over time. Right, but the one thing that you talk about, zero touch provisioning, is critical, right? You think about edge deployments: you're going to have to deal with a very diverse set of environments and how deployments happen. Think about, like, telco base stations, right. You have somewhere between 75,000 to 100,000 base stations in the US for each provider, right. How do you deploy them, right, if, let's say, you push one update or you want to provision the system? So what we bring to the table in the latest OpenShift release is that, hey, we make provisioning zero touch, meaning you can actually do that without any manual intervention. >> Yeah, so I think the Edge is going to raise the bar for the enterprise, I guess is my premise there. >> Absolutely. >> So Francis, thanks so much for coming on theCUBE. It's great to see you, and congratulations on the collaboration. It's an exciting area for you guys. >> Thank you again, Dave and Paul. >> Our pleasure. All right, keep it right there. After this quick break, we'll be back. Paul Gillin and Dave Vellante, you're watching theCUBE's coverage of Red Hat Summit 2022, live from the Boston Seaport. Be right back.
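As a side note on the medical diagnosis pattern Francis described (chest X-rays pulled from an object store, an AI/ML inference step, results pushed to a dashboard), here is a minimal Python sketch of that pipeline shape. It is an illustration only, assuming generic S3-style object storage and an HTTP dashboard; the bucket, endpoints, and the placeholder classifier are hypothetical, not the actual Red Hat validated pattern.

```python
# Minimal sketch of the pipeline shape described above: object store in,
# AI/ML inference in the middle, dashboard out. All names and endpoints are
# hypothetical placeholders; this is not the Red Hat validated pattern itself.
import boto3
import requests

s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")  # hypothetical store

def classify_pneumonia(image_bytes: bytes) -> float:
    """Placeholder for the AI/ML step; a real pipeline would run a trained model here."""
    return 0.0  # dummy confidence score for illustration

def process_xray(bucket: str, key: str) -> None:
    # Pull the chest X-ray from the object store.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    score = classify_pneumonia(body)
    # Push the assessment to a (hypothetical) dashboard service.
    requests.post(
        "https://dashboard.example.com/api/results",
        json={"image": key, "pneumonia_score": score},
        timeout=10,
    )

process_xray("chest-xrays", "incoming/patient-123.png")
```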

Published Date : May 11 2022


Gunnar Hellekson, Red Hat | Red Hat Summit 2022


 

(upbeat music) >> Welcome back to Boston, Massachusetts. We're here at the Seaport. You're watching theCUBE's coverage of Red Hat Summit 2022. My name is Dave Vellante and Paul Gillin is here; he's my cohost for the next day. We are going to dig into the famous RHEL, Red Hat Enterprise Linux. Gunnar Hellekson is here, he's the Vice President and General Manager of Red Hat Enterprise Linux. Gunnar, welcome to theCUBE. Good to see you. >> Thanks for having me. Nice to be here, Dave, Paul. >> RHEL 9 is, wow, nine, holy cow. It's been a lot of iterations. >> It's the highest version of RHEL we've ever shipped. >> And now we're talking edge. >> Yeah, that's right. >> And so, what's inside? Tell us. >> Well, there are really three groups we have to keep happy with a new RHEL release. The first is the hardware partners, right, because they rely on RHEL to light up all their delicious hardware that they're making. Then you've got application developers and the ISVs, who rely on RHEL to be that kind of stable platform for innovation. And then you've got the operators, the people who are actually using the operating system itself and trying to keep it running every day. So I'll start with the hardware side, which is something, as you know, and I think you talked about this with Matt just a few sessions earlier, the success of RHEL really hinges on our partnerships with the hardware partners. And in this case, in RHEL 9 we've got all the usual hardware suspects, and just recently, in January, we added support for ARM servers as general ARM server-class hardware. And so that's something customers have been asking for; delighted to be shipping that in RHEL 9. So now ARM is kind of a first-class citizen, right? Alongside x86, Power, Z, and all the other usual suspects. And then of course, working with our favorite public cloud providers, so making sure that RHEL 9 is available at AWS and Azure and GCP and all our other cloud friends, right? >> Yeah, you mentioned ARM, we're seeing ARM in the enterprise. We're obviously seeing ARM at the edge. You guys have been working with ARM for a long time. You're working with Intel, you're working with NVIDIA, you've got some announcements this week. Gunnar, how do you keep Linux from becoming a Franken-OS with all these capabilities? >> This is a great question. First is, the most important thing is to be working closely with, I mean, the whole point of Linux, and the reason why Linux works, is because you have all these people working together to make the same thing, right? And so fighting that is a bad idea. Working together with everyone, leaning into that collaboration, that's an important part of making it work over time. The other one is having, just like in any good relationship, healthy boundaries. And so making sure that we're clear about the things that we need to keep stable and the places where we're allowed to innovate, and striking the right balance between those two things; that allows us to continue to ship one coherent operating system while still keeping literally thousands of platforms happy. >> So you're not trying to suck in all the full function, you're trying to accommodate that function that the ecosystem is going to develop? >> Yeah, that's right. 
So the idea is that what we strive for is consistency across all of the infrastructures, and then allowing for kind of optimizations, and we still let ourselves take advantage of whatever indigenous features might appear on, say, an ARM chip or in such-and-such a cloud platform. But really, we're trying to deliver a uniform platform experience to the application developers, right? Because there can't be kind of one version of RHEL over here and another version of RHEL over there; the ecosystem wouldn't work. The whole point of Linux, and the whole point of Red Hat Enterprise Linux, is to be the same so that everything else can be different. >> And what incentives do you use to keep customers current? >> To keep customers current? Well, so the best thing to do, I found, is to meet customers where they are. So a lot of people think about us releasing RHEL 9, but at the same time we have Red Hat Enterprise Linux 8, we have Red Hat Enterprise Linux 7, all of these are running at the same time, and then we also have multiple minor release streams inside those. So at any given time, let's say a dozen different versions of RHEL are being maintained and kept up to date, and we do this precisely to make sure that we're not force-marching people into the new version. They have a Red Hat Enterprise Linux subscription; they should just be able to sit there and enjoy the minor version that they like, and we try and keep that going for as long as possible. >> Even if it's 10 years out of date? >> So, 10 years, interesting you chose that number, because that's the end of life. >> That's the end of the life cycle. >> Right. And so 10 years is about, that's the natural life of a given major release, but again, inside that you have several 10-year life cycles kind of cascading on each other, right? So nine is the start of the next 10-year cycle while we're still living inside the 10-year cycles of seven and eight. So lots of options for customers. >> How are you thinking about the edge? How do you define it, let's not go to the definition, but at a high level. (Gunnar laughing) Like, I was at a conference last week, it was Dell Tech World, I'll just say it. The edge to them was sort of the retail store. >> Yeah. >> Lowe's, okay, cool, I guess that's edgy, I guess. But I think space is the edge. (Gunnar chuckling) >> Right, right, right. >> Or a vehicle. How do you think about the edge? All of the above, or... The exciting stuff to me is that far edge, but I wonder if you can comment. >> Yeah, so there's all kinds of taxonomies out there for the edge. For me, I'm a simple country product manager at heart and so I try to keep it simple, right? And the way I think about the edge is, here's a use case in which somebody needs a small operating system that deploys on probably a small piece of hardware, usually varying sizes, but it could be pretty small. That thing needs to be updated without any human touching it, right? And it needs to be reliably maintained without any human touching it. Usually in the edge cases, actually touching the hardware is a very expensive proposition, so we're trying to be as hands-off as possible. >> No truck rolls. >> No truck rolls ever, right, exactly. (Dave chuckling) And then, now that I've got that stable base, I'm going to go take an application, I'll probably put it in a container for simplicity's sake, and same thing, I want to be able to deploy that application. 
If something goes wrong, I need to be able to roll back to a known good state, and then I need a set of management tools that allow me to touch things, make sure that everything is healthy, make sure that the updates roll out correctly, maybe do some A/B testing, things like that. So I think about that as, when we talk about the edge case for RHEL, that's the horizontal use case, and then we can do specializations inside particular verticals or particular industries, but at bottom that's the use case we're talking about when we talk about the edge. >> And an assumption of connectivity at some point? >> Yeah. >> Right, you don't have to always be on. >> Intermittent, latent, eventual connectivity. >> Eventual connectivity. (chuckles) That's right, in some tech terms. >> Red Hat was originally a one-trick pony. I mean, RHEL was it, and now you've got all of these other extensions and different markets that you expanded into. What's your role in coordinating what all those different functions are doing? >> Yeah, if you look at all the innovations we've made, whether it's in storage, whether it's in OpenShift and elsewhere, RHEL remains the beating heart, right? It's the place where everything starts. And so a lot of what my team does is, yes, we're trying to make all the partners happy; we're also trying to make our internal partners happy, right? So the OpenShift folks need stuff out of RHEL, just like any other software vendor. And so I really think about RHEL as: yes, we're a platform, yes, we're a product in our own right, but we're also a service organization for all the other parts of the portfolio. And the reason for that is we need to make sure all this stuff works together, right? Part of the whole reasoning behind the Red Hat portfolio at large is that each of these pieces builds on each other and complements each other, right? I think that's an important part of the Red Hat mission, the RHEL mission. >> There's an article in the Journal yesterday about how the tech industry was sort of pounding the drum on H-1B visas; there's a limit, I think it's been the same limit since 2005, 65,000 a year. We are facing, customers are facing, you guys I'm sure as well, a real skills shortage; there's a lack of talent. How are you seeing companies deal with that? What are you advising them? What are you guys doing yourselves? >> Yeah, it's interesting, especially as everybody went through some flavor of digital transformation during the pandemic, and now, kind of connected to that, everybody's making a move to the public cloud. They're making operating system choices when they're making those platform choices, right? And I think what's interesting is that what they're coming to is, "Well, I have a Linux skills shortage, and for a thousand reasons the market has not provided enough Linux admins." I mean, these are very lucrative positions, right, which command a lot of money; you would expect the supply would eventually catch up, but for whatever reason it's not catching up. So I can't solve this by throwing bodies at it, so I need to figure out a more efficient way of running my Linux operation. People are making a couple of choices. 
The first is they're ensuring that they have consistency in their operating system choices, whether it's on premises or in the cloud, or even out on the edge. If I have to juggle three, four different operating systems as I'm going through these three or four different infrastructures, that doesn't make any sense, 'cause the one thing that is most precious to me is my Linux talent, right? And so I need to make sure that they're consistent, optimized, and efficient. The other thing they're doing is tooling and automation, especially through tools like Ansible, right? Being able to take advantage of as much automation as possible and as much consistency as possible so that they can make the most of the Linux talent that they do have. And so with Red Hat Enterprise Linux 9 in particular, you see us make a big investment in things like more automation tools for things like SAP and SQL Server deployments; you'll see us make investments in basic stuff like the web console, right? You should now be able to go and point and click and do basic Linux administration tasks, which lowers the barrier to entry and makes it easier to find people to actually administer the systems that you have. >> As you move out onto these new platforms, particularly on the edge, many of them will be much smaller, limited function. How do you make the decisions about what features you're going to keep in RHEL when you're running on a thermostat? >> Okay, so let me be clear, I don't want RHEL to run on a thermostat. (everybody laughing) >> I gave you the advantage there. >> I can't handle the margins on something like that. But in the end... >> You're running on the GM vehicle. >> Yeah, right? And so the choice, the most important thing we can do, is give customers the tools that they need to make the choice that's appropriate for their deployment. I have learned over several years in this business that if I start choosing what content a customer wants on their operating system, I will always guess it wrong, right? So my job is to make sure that I have a library of reliable, secure software options for them that they can use as ingredients in their solution, and I give them tools that allow them to kind of curate the operating system that they need. So that's a tool like Image Builder, which we just announced. The Image Builder service lets a customer go in and point and click and kind of compose the edge operating system they need, hit a button, and now they have an atomic image that they can go deploy out on the edge reliably, right? >> Gunnar, can you clarify the cadence of releases? >> Oh yeah. >> You guys, the change that you made there. >> Yeah. >> Why that change occurred and what's the standard today? >> Yeah, so back when we released RHEL 8, we were just talking about hardware, and you know, it's ARM and x86, all these different kinds of hardware. The hardware market, I tell everybody internally, just got real weird, right? The schedules are crazy, we've got so many more entrants, everything is kind of out of sync from where it used to be. It used to be there was a metronome, right? You mentioned Moore's law earlier. It was like an 18-month metronome everybody could kind of set their watch to. >> Right. >> So that's gone, and so now we have so much hardware that we need to reconcile. 
The only way for us to provide the kind of stability and consistency that customers were looking for was to set our own clock. So we said three years for every major release, six months for every minor release, and that we will ship a new minor release every six months and a new major release every three years, whether we need it or not. And that has value all by itself. It means that customers can now plan ahead of time and know, okay, in 36 months the next major release is going to come out, and now that's something I can plan my workloads around, that's something I can plan a data center migration around, things like that. So the consistency of this, and it was a terrifying promise to make three years ago, I am now delighted to announce that we actually made good on it three years later, right? And we plan to again, three years from now. >> As a follow-up, is it primarily the processor optionality and diversity? Or, as I was talking to a system architect the other day, his premise was that we're moving from a processor-centric world to a connect-centric world: not just the processor, but the memories, the IO, the controllers, the NICs, and it's just keeping that system in balance. Does that affect you, or is it primarily the processor? >> Oh, it absolutely affects us, yeah. >> How so? >> Yeah, so the operating system is the thing that everyone relies on to hide all that stuff from everybody else, right? And so if we cannot offer that abstraction from all of these hardware choices that people need to make, then we're not doing our job. And so that means we have to encompass all the hardware configurations and all the hardware use cases that we can in order to make an application successful. So if people want to go disaggregate all of their components, we have to let 'em do that. If they want to have a kind of more traditional, boxed-up OEM experience, they should be able to do that too. So yeah, this is what I mean: it is RHEL's responsibility and our duty to make sure that people are insulated from all this chaos underneath; that is a good chunk of the job, yeah. >> The hardware and the OS used to be inseparable, right, before (indistinct). Hence the importance of hardware. >> Yeah, that's right. >> I'm curious how your job changes. So every 36 months you roll out a new release, which you did today, you announced a new release. You go back into the workplace in two days, how is life different? >> Not at all, so the only constant is change, right? And to be honest, a major release, that's a big event for our release teams. That's a big event for our engineering teams. It's a big event for our product management teams. But all these folks have moved on, and we're now already planning RHEL 9.1 and 9.2 and 8.7 and the rest of the releases. And so it's kind of like a brief celebration and then right back to work. >> Okay, it doesn't change so much. >> What can we look forward to? What's the future look like for RHEL, RHEL 10? >> Oh yeah, bigger, stronger, faster, more optimized for this and that and such, and you get... >> Longer, lower, wider. >> Yeah, that's right, yeah, that's right, yeah. >> I am curious about CentOS Stream, because there was some controversy around the end of life for CentOS and the move to CentOS Stream. >> Yeah. >> A lot of people, including me, are not really clear on what Stream is and how it differs from CentOS. Can you clarify that? 
>> Absolutely, so when Red Hat Enterprise Linux was first created, this was back in the days of Red Hat Linux, right? And because we couldn't balance the needs of the hobbyist market with the needs of the enterprise market, we split into Red Hat Enterprise Linux and Fedora, okay? So then for 15 years, yeah, about 15 years, we had Fedora, which is where we took all of our risks. That was kind of our early program where we started integrating new components, new open source projects, and all the rest of it, and then eventually we would take that innovation and feed it into the next version of Red Hat Enterprise Linux. The trick with that is that the Red Hat Enterprise Linux work that we did was largely internal to Red Hat and wasn't accessible to partners. And we've just spent a lot of time talking about how much we need to be collaborating with partners. A lot of them had to wait until, like, the beta came out before they actually knew what was going to be in the box. Okay, well, that was okay for a while, but now that the market is the way that it is, things are moving so quickly, we need a better way to allow partners to work together with us further upstream from the actual product development. So that's why we created CentOS Stream. So CentOS Stream is the place where we kind of host the party and people can watch the next version of Red Hat Enterprise Linux get developed in real time; partners can come in and help, customers can come in and help. And we've been really proud of the fact that Red Hat Enterprise Linux 9 is the first release that came completely out of CentOS Stream. Another way of putting that is that Red Hat Enterprise Linux 9 is the first version of RHEL where 80, 90% of it was built completely in the open. >> Okay, so that's the new playground. >> Yeah, that's right. >> You took a lot of negative pushback when you made the announcement. Is that basically because the CentOS users didn't understand what you were doing? >> No, I think when we brought CentOS Linux on, this was one of the things that we wanted to do: we wanted to create this space where we could start collaborating with people. Here's the lesson we learned. It is very difficult to collaborate when you are downstream of the product you're trying to improve, because you've already shipped the product. And so when you're collaborating downstream, any changes you make have to go all the way up the waterslide before they can head all the way back down. So the real pivot that we made was moving that partnership and that collaboration activity from the downstream of Red Hat Enterprise Linux to putting it right in the critical path of Red Hat Enterprise Linux development. >> Great, well, thank you for that, Gunnar. Thanks for coming on theCUBE, it's great to... >> Yeah, my pleasure. >> See you, and have a great day tomorrow. Thanks, and we look forward to seeing you tomorrow. We start at 9:00 AM East Coast time, I think, with the keynotes, and we will be here right after that to break them down, Paul Gillin and myself. This is day one of theCUBE's coverage of Red Hat Summit 2022 from Boston. We'll see you tomorrow, thanks for watching. (upbeat music)

Published Date : May 10 2022


Power Panel: Does Hardware Still Matter


 

(upbeat music) >> The ascendancy of cloud and SaaS has shone new light on how organizations think about, pay for, and value hardware. Once sought-after skills for practitioners with expertise in hardware troubleshooting, configuring ports, tuning storage arrays, and maximizing server utilization have been superseded by demand for cloud architects, DevOps pros, and developers with expertise in microservices, containers, application development, and the like. Even a company like Dell, the largest hardware company in enterprise tech, touts that it has more software engineers than those working in hardware. It begs the question: is hardware going the way of COBOL? Well, not likely. Software has to run on something, but the labor needed to deploy, troubleshoot, and manage hardware infrastructure is shifting. At the same time, we've seen the value flow also shifting in hardware. Once a world dominated by x86 processors, value is flowing to alternatives like NVIDIA and ARM-based designs. Moreover, other componentry like NICs, accelerators, and storage controllers is becoming more advanced, integrated, and increasingly important. The question is, does it matter? And if so, why does it matter and to whom? What does it mean to customers, workloads, OEMs, and the broader society? Hello and welcome to this week's Wikibon theCUBE Insights powered by ETR. In this Breaking Analysis, we've organized a special power panel of industry analysts and experts to address the question: does hardware still matter? Allow me to introduce the panel. Bob O'Donnell is president and chief analyst at TECHnalysis Research. Zeus Kerravala is the founder and principal analyst at ZK Research. David Nicholson is a CTO and tech expert. Keith Townsend is CEO and founder of CTO Advisor. And Marc Staimer is the chief dragon slayer at Dragon Slayer Consulting and oftentimes a Wikibon contributor. Guys, welcome to theCUBE. Thanks so much for spending some time here. >> Good to be here. >> Thanks. >> Thanks for having us. >> Okay, before we get into it, I just want to bring up some data from ETR. This is a survey that ETR does every quarter. It's a survey of about 1200 to 1500 CIOs and IT buyers, and I'm showing a subset of the taxonomy here. This is an XY axis, and the vertical axis is something called net score. That's a measure of spending momentum. It's essentially the percentage of customers that are spending more on a particular area than those spending less. You subtract the lesses from the mores and you get a net score. Anything on the horizontal axis is pervasion in the data set. Sometimes they call it market share. It's not like IDC market share. It's just the percentage of activity in the data set as a percentage of the total. That red 40% line, anything over that is considered highly elevated. And for the past, I don't know, eight to 12 quarters, the big four have been AI and machine learning, containers, RPA, and cloud, and cloud of course is very impressive because not only is it elevated on the vertical axis, but you know it's very highly pervasive on the horizontal. So what I've done is highlighted in red that historical hardware sector. The server, the storage, the networking, and even PCs, despite the work from home, are depressed in relative terms. And of course, data center colocation services. Okay, so you're seeing obviously hardware is not... People don't have the spending momentum today that they used to. 
They've got other priorities, et cetera. But I want to start and go kind of around the horn with each of you: what is the number one trend that each of you sees in hardware, and why does it matter? Bob O'Donnell, can you please start us off? >> Sure, Dave. So look, I mean, hardware is incredibly important, and one comment I'll make first on that slide is let's not forget that hardware, even though it may not be growing, the amount of money spent on hardware continues to be very, very high. It's just a little bit more stable. It's not as subject to big jumps as we see certainly in other software areas. But look, the important thing that's happening in hardware is the diversification of the types of chip architectures we're seeing and how and where they're being deployed, right? You referred to this in your opening. We've moved from a world of x86 CPUs from Intel and AMD to things like, obviously, GPUs, DPUs. We've got VPUs for, you know, computer vision processing. We've got AI-dedicated accelerators. We've got all kinds of other network acceleration tools and AI-powered tools. There's an incredible diversification of these chip architectures, and that's been happening for a while, but now we're seeing them more widely deployed, and it's being done that way because workloads are evolving. The kinds of workloads that we're seeing in some of these software areas require different types of compute engines than we've traditionally had. The other thing is (coughs), excuse me, the power requirements based on where geographically that compute happens are also evolving. This whole notion of the edge, which I'm sure we'll get into in a little bit more detail later, is driven by the fact that where the compute actually sits, closer in theory to the edge and where edge devices are, depending on your definition, changes the power requirements. It changes the kind of connectivity that connects the applications to those edge devices and those applications. So all of those things are being impacted by this growing diversity in chip architectures. And that's a very long-term trend that I think we're going to continue to see play out through this decade and well into the 2030s as well. >> Excellent, great, great points. Thank you, Bob. Zeus, up next, please. >> Yeah, and I think the other thing when you look at this chart to remember too is, you know, through the pandemic and the work-from-home period, a lot of companies did put their office modernization projects on hold, and you heard that echoed, you know, from really all the network manufacturers anyways. They always had projects underway to upgrade networks; they put 'em on hold. Now that people are starting to come back to the office, they're looking at that now. So we might see some change there, but Bob's right. The size of those markets is quite a bit different. I think the other big trend here is the hardware companies, at least in the areas that I look at, networking, are understanding now that it's a combination of hardware and software and silicon that works together that creates that optimum type of performance and experience, right? So some things are best done in silicon, some things like data forwarding and things like that. Historically, when you look at the way network devices were built, you did everything in hardware. You configured in hardware, it did all the data forwarding for you, and it did all the management. And that's been decoupled now. So more and more of the control element has been placed in software. 
A lot of the high-performance things, encryption, and as I mentioned, data forwarding, packet analysis, stuff like that, is still done in hardware, but not everything is done in hardware. And so it's a combination of the two. I think, for the people that work with the equipment as well, there's been more of a shift to understanding how to work with software. And this is a mistake I think the industry made for a while: we had everybody convinced they had to become a programmer. It's really more being a software power user. Can you pull things out of software? Can you work through API calls and things like that? But I think the big frame here is, Dave, it's a combination of hardware and software working together that really makes a difference. And, you know, how much you invest in hardware versus software kind of depends on the performance requirements you have. And I'll talk about that later, but that's really the big shift that's happened here. It's the vendors that figured out how to optimize performance by leveraging the best of all of those. >> Excellent. You guys both brought up some really good themes that we can tap into. Dave Nicholson, please. >> Yeah, so just kind of picking up where Bob started off. Not only are we seeing the rise of a variety of CPU designs, but I think increasingly the connectivity that's involved from a hardware perspective, from kind of a server or service design perspective, has become increasingly important. I think we'll get a chance to look at this in more depth a little bit later, but when you look at what happens on the motherboard, you know, we're not in so much a CPU-centric world anymore. Various application environments have various demands, and you can meet them by using a variety of components. And it's extremely significant when you start looking down at the component level. It's really important that you optimize around those components. So I guess my summary would be, I think we are moving out of the CPU-centric hardware model into more of a connectivity-centric model. We can talk more about that later. >> Yeah, great. And thank you, David. And Keith Townsend, I'm really interested in your perspectives on this. I mean, for years you worked in a data center surrounded by hardware. Now that we have the software-defined data center, please chime in here. >> Well, you know, I'm going to dig deeper into that software-defined data center nature of what's happening with hardware. Hardware is meeting software; infrastructure as code is a thing. What does that code look like? We're still trying to figure that out, but servicing up these capabilities that the previous analysts have brought up, how do I ensure that I can get the level of services needed for the applications that I need? Whether they're legacy, traditional data center workloads, AI/ML workloads, workloads at the edge. How do I codify that and consume that as a service? And hardware vendors are figuring this out. HPE, the big push into GreenLake as a service. Dell now with Apex, taking what we need, these bare-bone components, moving it forward with DDR5, CXL, et cetera, and surfacing that as code or as services. This is a very tough problem as we transition from consuming a hardware-based configuration to this infrastructure-as-code paradigm shift. >> Yeah, programmable infrastructure, really attacking that sort of labor discussion that we were having earlier, okay. Last but not least, Marc Staimer, please. >> Thanks, Dave. My peers raised really good points.
I agree with most of them, but I'm going to disagree with the title of this session, which is, does hardware matter? It absolutely matters. You can't run software on air. You can't run it in an ephemeral cloud, although there's the technical cloud, and that's a different issue. The cloud has kind of changed everything. And from a market perspective, in the 40-plus years I've been in this business, I've seen this perception that hardware has to go down in price every year. And part of that was driven by Moore's law. And we're coming to, let's say, a lag or an end, depending on who you talk to, of Moore's law. So we're not doubling our transistors every 18 to 24 months in a chip, and as a result of that, there's been a higher emphasis on software. From a market perception, there's no penalty. They don't put the same pressure on software from the market to reduce the cost every year that they do on hardware, which is kind of bass-ackwards when you think about it. Hardware costs are fixed. Software costs tend to be very low. It's kind of a weird thing that we do in the market. And what's changing is we're now starting to treat hardware like software, from an OPEX versus CapEx perspective. So yes, hardware matters. And we'll talk about that more at length. >> You know, I want to follow up on that. And I wonder if you guys have a thought on this. Bob O'Donnell, you and I have talked about this a little bit. Marc, you just pointed out that Moore's law could be waning. Pat Gelsinger recently at their investor meeting promised that Moore's law is alive and well. And the point I made in breaking analysis was, okay, great. You know, Pat said doubling transistors every 18 to 24 months, let's say that Intel can do that, even though we know it's waning somewhat. Look at the M1 Ultra from Apple (chuckles). In about 15 months they increased transistor density on their package by 6X. So to your earlier point, Bob, we have these sort of alternative processors that are really changing things. And to Dave Nicholson's point, there's a whole lot of supporting components as well. Do you have a comment on that, Bob? >> Yeah, I mean, it's a great point, Dave. And one thing to bear in mind as well, not only are we seeing a diversity of these different chip architectures and different types of components, as a number of us have raised, the other big point, and I think it was Keith that mentioned it, is CXL and interconnect on the chip itself dramatically changing it. And a lot of the more interesting advances that are going to continue to drive Moore's law forward, in terms of the way we think about performance, if perhaps not the number of transistors per se, are the interconnects that become available. You're seeing the development of chiplets or tiles, people use different names, but the idea is you can have different components being put together, eventually in sort of a Lego block style. And what that's also going to allow, not only is that going to give interesting performance possibilities 'cause of the faster interconnect, so you can have shared memory between things, which for big workloads like AI, huge data sets, can make a huge difference in terms of how you talk to memory over a network connection, for example. But not only that, you're going to see more diversity in the types of solutions that can be built. So we're going to see even more choices in hardware from a silicon perspective, because you'll be able to piece together different elements.
And oh, by the way, the other benefit of that is we've reached a point in chip architectures where not everything benefits from being smaller. We've been so focused and so obsessed, when it comes to Moore's law, with the size of each individual transistor, and yes, for certain architecture types, CPUs and GPUs in particular, that's absolutely true, but we've already hit the point where things like RF for 5G and Wi-Fi and other wireless technologies and a whole bunch of other things actually don't get any better with a smaller transistor size. They actually get worse. So the beauty of these chiplet architectures is you could actually combine different chip manufacturing sizes. You know, you hear about four nanometer and five nanometer along with 14 nanometer on a single chip, each one optimized for its specific application, yet together they can give you the best of all worlds. And so we're just at the very beginning of that era, which I think is going to drive a ton of innovation. Again, it gets back to my comment about different types of devices located geographically in different places, at the edge, in the data center, you know, in a private cloud versus a public cloud. All of those things are going to be impacted, and there'll be a lot more options because of this silicon diversity and this interconnect diversity that we're just starting to see. >> Yeah, David Nicholson's got a graphic on that they're going to show later. Before we do that, I want to introduce some data. I actually want to ask Keith to comment on this before we, you know, go on. This next slide is some data from ETR that shows the percent of customers that cited difficulty procuring hardware. And you can see the red is they had significant issues, and it's most pronounced in laptops and networking hardware on the far right-hand side, but virtually all categories, firewalls, peripherals, servers, storage, are having moderately difficult procurement issues. That's the sort of pinkish, as opposed to the red, significant challenges. So Keith, I mean, what are you seeing with your customers in the hardware supply chains and bottlenecks? And you know we're seeing it with automobiles and appliances, so it goes beyond IT, the semiconductor, you know, challenges. What's been the impact on the buyer community and society, and do you have any sense as to when it will subside? >> You know, I was just asked this question yesterday and I'm feeling the pain. As kind of a side project within the CTO Advisor, we built a hybrid infrastructure, a traditional IT data center, that we're walking with the traditional customer and modernizing that data center. So it was, you know, kind of a snapshot in time in 2016, 2017: 10 gigabit ARISTA switches, some older Dell 730xd servers, you know, speeds and feeds. And we said we would modernize that with the latest Intel stack and connect it to the public cloud, and then the pandemic hit, and we are experiencing a lot of the same challenges. I thought we'd easily migrate from 10 gig networking to the 25 gig networking path that customers are going on. The 10 gig network switches that I bought used are now double the price, because you can't get legacy 10 gig network switches, because all of the manufacturers are focusing on the more profitable 25 gig. For capacity, even the 25 gig switches, and we're focused on networking right now, are hard to procure. We're talking about nine to 12 months or more lead time. So we're seeing customers adjust by adopting cloud.
But if you remember early on in the pandemic, Microsoft Azure kind of gated customers that didn't have a capacity agreement. So customers are keeping an eye on that. There's a desire to abstract away from the underlying vendor, to be able to control or provision your IT services in a way that we do with VMware or some other virtualization technology, where it doesn't matter who can get me the hardware; they can just get me the hardware, because it's critically impacting projects and timelines. >> So that's a great setup, Zeus, for you, with Keith mentioning earlier the software-defined data center, with software-defined networking and cloud. Do you see a day where networking hardware is commoditized and it's all about the software, or are we there already? >> No, we're not there already. And I don't see that really happening any time in the near future. I do think it's changed though. And just to be clear, I mean, when you look at that data, this is saying customers have had problems procuring the equipment, right? And there's not a network vendor out there... I've talked to Norman Rice at Extreme, and I've talked to the folks at Cisco and ARISTA about this. They all said they could have had blowout quarters had they had the inventory to ship. So it's not like customers aren't buying this anymore, right? I do think, though, when it comes to networking, the network has certainly changed some, because there's a lot more controls, as I mentioned before, that you can do in software. And I think the customers need to start thinking about the types of hardware they buy, and, you know, where they're going to use it, and, you know, what its purpose is. Because I've talked to customers that have tried to run software on commodity hardware where the performance requirements are very high, and it's bogged down, right? It just doesn't have the horsepower to run it. And, you know, even when you do that, you have to start thinking of the components you use. The NICs you buy. And I've talked to customers that have simply just gone through the process of replacing a NIC card in a commodity box and had some performance problems and, you know, things like that. So if agility is more important than performance, then by all means try running software on commodity hardware. I think that works in some cases. If performance, though, is more important, that's when you need that kind of turnkey hardware system. And I've actually seen more and more customers reverting back to that model. In fact, when you talk to even some startups today about when they come to market, they're delivering things more on appliances, because that's what customers want. And so there's this kind of pivot, this pendulum of agility and performance. And if performance absolutely matters, that's when you do need to buy these kind of turnkey, prebuilt hardware systems. If agility matters more, that's when you can go more to software, but the underlying hardware still does matter. So I think, you know, will we ever have a day where you can just run it on whatever hardware? Maybe, but I'll long be retired by that point. So I don't care. >> Well, you bring up a good point, Zeus. And I remember the early days of cloud, the narrative was, oh, the cloud vendors, they don't use EMC storage, they just run on commodity storage. And then of course, lo and behold, you know, they trotted out James Hamilton to talk about all the custom hardware that they were building. And you saw Google and Microsoft follow suit.
>> Well, (indistinct) been falling for this forever, right? And I mean, all the way back to the turn of the century, we were calling for the commoditization of hardware. And it's never really happened, because as long as you can drive innovation into it, customers will always lean towards the innovation cycles, 'cause they get more features faster and things. And so the vendors have done a good job of keeping that cycle up, but it'll be a long time before. >> Yeah, and that's why you see companies like Pure Storage. A storage company has 69% gross margins. All right. I want to jump ahead. We're going to bring up slide four. I want to go back to something that Bob O'Donnell was talking about, the sort of supporting act, the diversity of silicon, and we've marched to the cadence of Moore's law for decades. You know, we asked, you know, is Moore's law dead? We say it's moderating. Dave Nicholson, you want to talk about those supporting components, and you shared with us a slide on that shift. You call it a shift from a processor-centric world to a connect-centric world. What do you mean by that? And let's bring up slide four and you can talk to that. >> Yeah, yeah. So first, I want to echo this sentiment that the question "does hardware matter" is... the answer is of course it matters. Maybe the real question should be, should you care about it? And the answer to that is, it depends who you are. If you're an end user using an application on your mobile device, maybe you don't care how the architecture is put together. You just care that the service is delivered. But as you back away from that and you get closer and closer to the source, someone needs to care about the hardware, and it should matter. Why? Because essentially what hardware is doing is consuming electricity and dollars, and the more efficiently you can configure hardware, the more bang you're going to get for your buck. So it's not only a quantitative question in terms of how much you can deliver, but it also ends up being a qualitative change, as capabilities allow for things we couldn't do before, because we just didn't have the aggregate horsepower to do it. So this chart actually comes out of some performance tests that were done. So it happens to be Dell servers with Broadcom components. And the point here was to peel back, you know, peel off the top of the server and look at what's in that server, starting with, you know, the PCI interconnect. So PCIe Gen 3, Gen 4, moving forward. What are the effects of the interconnect on application performance, translating into new orders per minute processed per dollar, et cetera, et cetera? If you look at the advances in CPU architecture mapped against the advances in interconnect and storage subsystem performance, you can see that CPU architecture is sort of lagging behind in a way. And Bob mentioned this idea of tiling and all of the different ways to get around that. When we do performance testing, we can actually peg CPUs just running the performance tests, without any actual database environments working. So right now we're at this sort of imbalance point, where you have to make sure you design things properly to get the most bang per kilowatt hour of power, per dollar input. So the key thing here, what this is highlighting, is just a very specific example: you take a card that's designed as a Gen 3 PCIe device, and you plug it into a Gen 4 slot. Now the card is the bottleneck. You plug a Gen 4 card into a Gen 4 slot.
Now the Gen 4 slot is the bottleneck. So we're constantly chasing these bottlenecks. Someone has to be focused on that from an architectural perspective; it's critically important. So there's no question that it matters. But of course, various people in this food chain won't care where it comes from. I guess a good analogy might be, where does our food come from? If I get a steak, it's a pink thing wrapped in plastic, right? Well, there are a lot of inputs that a lot of people have to care about to get that to me. Do I care about all of those things? No. Are they important? They're critically important. >> So, okay, I want to get to the... okay, so what does this all mean to customers? And so what I'm hearing from you is, to balance a system is becoming, you know, more complicated. And I've kind of been waiting for this day for a long time, because as we all know, the bottleneck was always the spinning disk, the last mechanical piece. So people who wrote software knew that when they were doing it right, the disk had to go and do stuff, and so they were doing other things in the software. And now with all these new interconnects and flash and things like that, you could do atomic writes. And so that opens up new software possibilities, and combine that with alternative processors. But what's the so-what on this to the customer and the application impact? Can anybody address that? >> Yeah, let me address that for a moment. I want to leverage some of the things that Bob said, Keith said, Zeus said, and David said. Yeah, so I'm a bit of a contrarian on some of this. For example, on the chip side, as the chips get smaller, 14 nanometer, 10 nanometer, five nanometer, soon three nanometer, we talk about more cores, but the biggest problem on the chip is the interconnect from the chip, 'cause the wires get smaller. People don't realize, in 2004 the latency on those wires in the chips was 80 picoseconds. Today it's 1300 picoseconds. That's on the chip. This is why they're not getting faster. So we may be getting a little bit of a slowdown in Moore's law. But even as we kind of conquer that, you still have the interconnect problem, and the interconnect problem goes beyond the chip. It goes within the system, composable architectures. It goes to the point that Keith made: ultimately you need a hybrid, because what we're seeing, what I'm seeing when I'm talking to customers, the biggest issue they have is moving data. Whether it be in a chip, in a system, in a data center, between data centers, moving data is now the biggest gating item in performance. So if you want to move it from, let's say, your transactional database to your machine learning, it's the bottleneck, it's moving the data. And so when you look at it from a distributed environment, now you've got to move the compute to the data. The only way to get around these bottlenecks today is to spend less time trying to move the data and more time taking the compute, the software running on hardware, closer to the data. Go ahead. >> So is this what you mean, when David Nicholson was talking about a shift from a processor-centric world to a connectivity-centric world? You're talking about moving the bits across all the different components; that, not the processor, you're saying, is essentially becoming the bottleneck, or the memory, I guess. >> Well, that's one of them, and there's a lot of different bottlenecks, but it's the data movement itself. It's moving away from, wait, why do we need to move the data?
Can we move the compute, the processing, closer to the data? Because if we keep them separate... and this has been a trend now where people are moving processing away from it. It's like the edge. I think it was Zeus or David, you were talking about the edge earlier. As you look at the edge, who defines the edge, right? Is the edge a closet or is it a sensor? If it's a sensor, how do you do AI at the edge, when you don't have enough power, you don't have enough compute? People are inventing chips to do that. To do all that at the edge, to do AI within the sensor, instead of moving the data to a data center or a cloud to do the processing. Because the lag in latency is always limited by the speed of light. How fast can you move the electrons? And all this interconnecting, all the processing, and all the improvement we're seeing in the PCIe bus from three, to four, to five, to CXL, to higher bandwidth on the network, that's all great, but none of that deals with the speed-of-light latency. And that's an-- Go ahead. >> You know, Marc, no, I just want to, just because what you're referring to could be looked at at a macro level, which I think is what you're describing. You can also look at it at a more micro level from a systems design perspective, right? I'm going to be the resident knuckle-dragging hardware guy on the panel today. But it's exactly right. Moving compute closer to data includes concepts like peripheral cards that have built-in intelligence, right? So again, in some of this testing that I'm referring to, we saw dramatic improvements when you basically took the horsepower, instead of using the CPU horsepower, for things like IO. Now you have essentially offload engines in the form of storage controllers, RAID controllers, and of course, for Ethernet, NICs, smart NICs. And so when you can have these sorts of offload engines, and we've gone through these waves over time, people think, well, wait a minute, RAID controllers and NVMe flash storage devices, does that make sense? It turns out it does. Why? Because you're actually, at a micro level, doing exactly what you're referring to. You're bringing compute closer to the data. Now, closer to the data meaning closer to the data storage subsystem. It doesn't solve the macro issue that you're referring to, but it is important. Again, going back to this idea of system design optimization, always chasing the bottleneck, plugging the holes. Someone needs to do that in this value chain in order to get the best value for every kilowatt hour of power and every dollar. >> Yeah. >> Well, this whole drive for performance has created some really interesting architectural designs, right? Like, Nicholson, the rise of the DPU, right? Brings more processing power into systems that already had a lot of processing power. There's also been some really interesting, you know, kind of innovation in the area of systems architecture too. If you look at the way Nvidia goes to market, their DRIVE kit is a prebuilt piece of hardware, you know, optimized for self-driving cars, right? They partnered with Pure Storage and ARISTA to build that AI-ready infrastructure. I remember when I talked to Charlie Giancarlo, the CEO of Pure, about when the three companies rolled that out. He said, "Look, if you're going to do AI, you need good storage. You need fast storage, a fast processor and a fast network." And so for customers to be able to put that together themselves was very, very difficult. There's a lot of software that needs tuning as well.
So the three companies partnered together to create a fully integrated, turnkey hardware system with a bunch of optimized software that runs on it. And so in that case, in some ways the hardware was leading the software innovation. And so the variety of different architectures we have today around hardware has really exploded. And I think it's part of what Bob brought up at the beginning about the different chip designs. >> Yeah, Bob talked about that earlier. Bob, I mean, most AI today is modeling, you know, and a lot of that's done in the cloud, and it looks from my standpoint anyway that the future is going to be a lot of AI inferencing at the edge. And that's a radically different architecture, Bob, isn't it? >> It is, it's a completely different architecture. And just to follow up on a couple points, excellent conversation, guys. Dave talked about system architecture, and really that's what this boils down to, right? But it's looking at architecture at every level. I was talking about the individual different components, the new interconnect methods. There's this new thing called UCIe, universal connection... I forget what it stands for, but it's a mechanism for doing chiplet architectures. But then again, you have to take it up to the system level, 'cause it's all fine and good if you have this SoC that's tuned and optimized, but it has to talk to the rest of the system. And that's where you see other issues. And you've seen things like CXL and other interconnect standards, you know, and nobody likes to talk about interconnect 'cause it's really wonky and really technical and not that sexy, but at the end of the day it's incredibly important, exactly to the other points that were being raised, like Marc raised, for example, about getting that compute closer to where the data is. And that's where, again, a diversity of chip architectures helps, and exactly to your last comment there, Dave, putting that ability in an edge device is really at the cutting edge of what we're seeing in semiconductor design, and the ability to, for example, maybe it's an FPGA, maybe it's a dedicated AI chip, it's another kind of chip architecture that's being created to do that inferencing on the edge. Because again, it's the cost and the challenges of moving lots of data, whether it be from, say, a smartphone to a cloud-based application, or whether it be from a private network to a cloud, or any other kinds of permutations we can think of, that really matters. And the other thing is we're tackling bigger problems. So architecturally, not even just architecturally within a system, but when we think about DPUs and the sort of east-west data center movement conversation that we hear Nvidia and others talk about, it's about combining multiple sets of these systems to function together more efficiently, again with even bigger sets of data. So it really is about tackling where the processing is needed, having the interconnect and the ability to get the data you need to the right place at the right time. And because those needs are diversifying, we're just going to continue to see an explosion of different choices and options, which is going to make hardware even more essential, I would argue, than it is today. And so I think what we're going to see is not only does hardware matter, it's going to matter even more in the future than it does now. >> Great, yeah. Great discussion, guys. I want to bring Keith back into the conversation here.
Keith, if your main expertise in tech is provisioning LUNs, you probably want to look for another job. So maybe clearly hardware matters, but with software-defined everything, do people with hardware expertise matter outside of, for instance, component manufacturers or cloud companies? I mean, VMware certainly changed the dynamic in servers. Dell just spun off its most profitable asset in VMware. So it obviously thinks hardware can stand alone. How does an enterprise architect view the shift to software-defined, hyperscale cloud, and how do you see the shifting demand for skills in enterprise IT? >> So I love the question, and I'll take a different view of it. If you're a data analyst and your primary value add is that you do ETL transformation... I talked to a CDO, a chief data officer, at a midsize bank a little bit ago. He said 80% of his data scientists' time is spent on ETL. Super not value add. He wants his data scientists to do data science work. Chances are, if your only value is that you do LUN provisioning, then you probably don't have a job now. The technologies have gotten much more intelligent. As infrastructure pros, we want to give infrastructure pros the opportunities to shine, and I think the software-defined nature and the automation that we're seeing vendors undertake, whether it's Dell, HP, Lenovo, take your pick, or Pure Storage, NetApp, that are doing the automation and the ML needed so that these practitioners don't spend 80% of their time doing LUN provisioning and can focus on their true expertise, which is ensuring that data is stored, data is retrievable, data's protected, et cetera. I think the shift is to focus on that part of the job, ensuring that no matter where the data's at... because my data is spread across the enterprise, hybrid, different types. You know, Dave, you talk about the supercloud a lot. If my data is in the supercloud, protecting that data and securing that data becomes much more complicated than when it was me just procuring or provisioning LUNs. So when you say where should the shift be, or the look be, you know, it's focusing on the real value, which is making sure that customers can access data, can recover data, can get data at the performance levels that they need, within the price point they need, and get at those datasets where they need them. We talked a lot about where they need it. One last point about this interconnecting. I have this vision, and I think we all do, of composable infrastructure. This idea that scale-out does not solve every problem. The cloud can give me infinite scale-out. Sometimes I just need a single OS with 64 terabytes of RAM and 204 GPUs or GPU instances. That single OS does not exist today. And the opportunity is to create composable infrastructure so that we solve a lot of these problems that just simply don't scale out. >> You know, wow. So many interesting points there. I had just interviewed Zhamak Dehghani, who's the founder of data mesh, last week. And she made a really interesting point. She said, "Think about, we have separate stacks. We have an application stack and we have a data pipeline stack, and the transaction systems, the transaction database, we extract data from that," to your point, "we ETL it in, you know, it takes forever. And then we have this separate sort of data stack." If we're going to inject more intelligence and data and AI into applications, those two stacks, her contention is, they have to come together.
And when you think about, you know, supercloud bringing compute to data, that was what Hadoop was supposed to be. It ended up all sort of going into a central location, but it's almost a rhetorical question. I mean, it seems that that necessitates new thinking around hardware architectures as, it kind of, everything's the edge. And the other point is, to your point, Keith, it's really hard to secure that. So when you think about offloads, right, you've heard the stats, you know, Nvidia talks about it, Broadcom talks about it, that, you know, 25 to 30% of CPU cycles are wasted on doing things like storage offloads, or networking or security. It seems like maybe, Zeus, you have a comment on this. It seems like new architectures need to come together to support, you know, all of that stuff that Keith and I just discussed. >> Yeah, and by the way, I do want to come back to, Keith, the question you were just asked. Keith, it's the point I made at the beginning too, about engineers needing to be more software-centric, right? They do need to have better software skills. In fact, I remember talking to Cisco about this last year when they surveyed their engineer base; only about a third of 'em had ever made an API call, which, you know, kind of shows this big skillset change that has to come. But on the point of architectures, I think the big change here is edge, because it brings in distributed compute models. Historically, when you think about compute, even with multi-cloud, we never really had multi-cloud. We'd use multiple centralized clouds, but compute was always centralized, right? It was in a branch office, in a data center, in a cloud. With edge, what it creates is the rise of distributed computing, where we'll have an application that actually accesses different resources at different edge locations. And I think, Marc, you were talking about this: like, the edge could be in your IoT device, it could be your campus edge, it could be cellular edge, it could be your car, right? And so we need to start thinkin' about how our applications interact with all those different parts of that edge ecosystem, you know, to create a single experience. A lot of consumer apps largely work that way. If you think of an app like Uber, right? It pulls in information from all kinds of different edge applications, edge services, and, you know, it creates a pretty cool experience. We're just starting to get to that point in the business world now. There's a lot of security implications and things like that, but I do think it drives more architectural decisions to be made about how I deploy what data where, and where I do my processing, where I do my AI, and things like that. It actually makes the world more complicated. In some ways we can do so much more with it, but I think it does drive us more towards turnkey systems, at least initially, in order to, you know, ensure performance and security. >> Right. Marc, I wanted to go to you. You had indicated to me that you wanted to chat about this a little bit. You've written quite a bit about the integration of hardware and software. You know, we've watched Oracle's move from, you know, buying Sun and then basically using that in a highly differentiated approach, engineered systems. What's your take on all that? I know you also have some thoughts on the shift from CapEx to OPEX, chime in on that. >> Sure. When you look at it, there are advantages to having one vendor who has the software and hardware.
They can synergistically make them work together in ways that you can't on a commodity basis, where you own the software and somebody else has the hardware. I'll give you an example: Oracle. As you talked about with their Exadata platform, they literally are leveraging microcode in the Intel chips, and now in AMD chips, and all the way down to Optane. They basically make AMD database servers work with Optane memory, PMM, in their storage systems, not NVMe SSDs; PMM, I'm talking about the cards themselves. So there are advantages you can take advantage of if you own the stack, as you were putting out earlier, Dave, of both the software and the hardware. Okay, that's great. But on the other side of that, that tends to give you better performance, but it tends to cost a little more. On the commodity side it costs less, but you get less performance. What Zeus had said earlier, it depends where you're running your application. How much performance do you need? What kind of performance do you need? One of the things about moving to the edge, and I'll get to the OPEX-CapEx in a second, one of the issues about moving to the edge is what kind of processing do you need? If you're running in a CCTV camera on top of a traffic light, how much power do you have? How much cooling do you have that you can run this? And more importantly, do you have to take the data you're getting and move it somewhere else to get processed, and have the information sent back? I mean, there are companies out there like BrainChip that have developed AI chips that can run on the sensor without a CPU, without any additional memory. So, I mean, there's innovation going on to deal with this question of data movement. There are companies out there like Tachyum that are combining GPUs, CPUs, and DPUs in a single chip. Think of it as a super composable architecture. They're looking at being able to do more in less. On the OPEX and CapEx issue... >> Hold that thought, hold that thought on the OPEX-CapEx, 'cause we're running out of time and maybe you can wrap on that. I just wanted to pick up on something you said about the integrated hardware and software. I mean, other than the fact that, you know, Michael Dell unlocked whatever $40 billion for himself and Silver Lake, I was always a fan of a spin-in with VMware, basically becoming the Oracle of hardware. Now I know it would've been a nightmare for the ecosystem, and culturally they probably would've had a VMware brain drain, but does anybody have any thoughts on that as a sort of a thought exercise? I was always a fan of that on paper. >> I've got to eat a little crow. I did not like the Dell-VMware acquisition for the industry in general, and I think it hurt the industry in general; HPE, Cisco walked away a little bit from that VMware relationship. But when I talked to customers, they loved it. You know, I've got to be honest, they absolutely loved the integration. The VxRail, VxRack solution exploded. Nutanix became kind of an afterthought when it came to competing. So that spin-in, when we talk about the ability to innovate and the ability to create solutions that you just simply can't create because you don't have the full stack... Dell was well positioned to do that with a potential spin-in of VMware. >> Yeah, we're going to be-- Go ahead, please. >> Yeah, in fact, I think you're right, Keith. It was terrible for the industry. Great for Dell.
And I remember talking to Chad Sakac when he was running, you know, VCE, which became VxRack and VxRail; their ability to stay in lockstep with what VMware was doing... What was the number one workload running on hyperconverged forever? It was VMware. So their ability to remain in lockstep with VMware gave them a huge competitive advantage. And Dell came out of nowhere in, you know, the hyperconverged market and just started taking share because of that relationship. So, you know, from a Dell perspective, I thought it gave them a pretty big advantage that they didn't really exploit across their other properties, right? Networking and servers and things like that, they could have, given the dominance that VMware had. From an industry perspective, though, I do think it's better to have them decoupled. So. >> I agree. I mean, they could... I think they could have dominated in supercloud, and maybe they would become the next Oracle, where everybody hates 'em, but they kick ass. But guys, we've got to wrap up here. And so what I'm going to ask you is, I'm going to go in reverse order this time, you know, big takeaways from this conversation today, which, guys, by the way, I can't thank you enough for, phenomenal insights. But big takeaways, any final thoughts, any research that you're working on that you want to highlight, or, you know, what you look for in the future? Try to keep it brief. We'll go in reverse order. Maybe Marc, you could start us off, please. >> Sure. On the research front, I'm working on a total cost of ownership of an integrated database, analytics, and machine learning system versus separate services. On the other aspect that I wanted to chat about real quickly, OPEX versus CapEx: the cloud changed the market perception of hardware in the sense that you can use hardware, or buy hardware, like you do software. As you use it, pay for what you use, in arrears. The good thing about that is you're only paying for what you use, period. You're not paying for what you don't use. I mean, it's compute time, everything else. The bad side about that is you have no predictability in your bill. It's elastic, but every user I've talked to says every month it's different. And from a budgeting perspective, it's very hard to set up your budget year to year, and it's causing a lot of nightmares. So it's just something to be aware of. From a CapEx perspective, you have no more CapEx if you're using that kind of base system, but you lose a certain amount of control as well. So ultimately those are some of the issues. But my biggest point, my biggest takeaway from this, is that the biggest issue right now for everybody I talk to, in some shape or form, comes down to data movement, whether it be the ETL that you talked about, Keith, or other aspects, moving it between hybrid locations, moving it within a system, moving it within a chip. All those are key issues. >> Great, thank you. Okay, CTO Advisor, give us your final thoughts. >> All right. Really, really great commentary. Again, I'm going to point back to us taking the walk that our customers are taking, which is trying to do this conversion of the all-primary data center to a hybrid, of which I have this hard-earned philosophy that enterprise IT is additive. When we add a service, we rarely subtract a service. So the landscape and surface area of what we support has to grow. So our research focuses on taking that walk.
We are taking a monolithic application, decomposing that into containers, putting that in a public cloud, connecting that back to the private data center, and telling that story and walking that walk with our customers. This has been a super enlightening panel. >> Yeah, thank you. A real, real different world coming. David Nicholson, please. >> You know, it really hearkens back to the beginning of the conversation. You talked about momentum in the direction of cloud. I'm sort of spending my time under the hood, getting grease under my fingernails, focusing on where still the lion's share of spend will be in coming years, which is on-prem, and then of course, obviously, data center infrastructure for cloud. But really diving under the covers and helping folks understand the ramifications of movement between generations of CPU architecture. I know we all know Sapphire Rapids pushed into the future. When's the next Intel release coming? Who knows? We think, you know, in 2023. There have been a lot of people standing by from a practitioner's standpoint asking, well, what do I do between now and then? Does it make sense to upgrade bits and pieces of hardware, or go from a last generation to a current generation, when we know the next generation is coming? And so I've been very, very focused on looking at how these connectivity components, like RAID controllers and NICs... I know it's not as sexy as talking about cloud, but just how these components completely change the game and actually can justify movement from, say, a 14th-generation architecture to a 15th-generation architecture today, even though gen 16 is coming, let's say, 12 months from now. So that's where I am. Keep my phone number in the Rolodex. I literally reference Rolodex intentionally because, like I said, I'm in there under the hood and it's not as sexy. But yeah, so that's what I'm focused on, Dave. >> Well, you know, to paraphrase it, maybe a derivative paraphrase of, you know, Larry Ellison's rant on what is cloud? It's operating systems and databases, et cetera. RAID controllers and NICs live inside of clouds. All right. You know, one of the reasons I love working with you guys is 'cause you have such a wide observation space, and Zeus Kerravala, you, of all people, you know you have your fingers in a lot of pies. So give us your final thoughts. >> Yeah, I'm not as propeller-heady as my chip counterparts here. (all laugh) So, you know, I look at the world a little differently, and a lot of the research I'm doing now is on the impact that distributed computing has on customer and employee experiences, right? You talk to every business, and how the experiences they deliver to their customers is really differentiating how they go to market. And so they're looking at these different ways of feeding up data and analytics and things like that in different places. And I think this is going to have a really profound impact on enterprise IT architecture. We're putting more data, more compute in more places, all the way down to like little micro edges and retailers and things like that. And so we need the variety. Historically, if you think back to when I was in IT, you know, pre-Y2K, we didn't have a lot of choice in things, right? We had a server that was rack mount or standup, right? And there wasn't a whole lot of, you know, difference in choice. But today we can deploy, you know, these really high-performance compute systems on little blades inside servers, or inside, you know, autonomous vehicles and things. I think the world from here gets...
You know, just the choice of what we have and the way hardware and software work together is really going to, I think, change the world and the way we do things. We're already seeing that, like I said, in the consumer world, right? There's so many things you can do from, you know, a smart home perspective, you know, natural language processing, stuff like that. And it's starting to hit businesses now. So just wait and watch the next five years. >> Yeah, totally. The computing power at the edge is just going to be mind-blowing. >> It's unbelievable what you can do at the edge. >> Yeah, yeah. Hey, Z, I just want to say that we know you're not a propeller head, and I for one would like to thank you for having your master's thesis hanging on the wall behind you, 'cause we know that you studied basket weaving. >> I was actually a physics and math major, so. >> Good man. Another math major. All right, Bob O'Donnell, you're going to bring us home. I mean, we've seen the importance of semiconductors and silicon in our everyday lives, but your last thoughts, please. >> Sure, and just to clarify, by the way, I was a great books major and this was actually for my final paper. And so I was like philosophy and all that kind of stuff and literature, but I still somehow got into tech. Look, it's been a great conversation, and I want to pick up a little bit on a comment Zeus made, which is it's the combination of the hardware and the software coming together, and the manner with which that needs to happen, I think, is critically important. And the other thing is, because of the diversity of the chip architectures and all those different pieces and elements, it's going to be how software tools evolve to adapt to that new world. So I look at things like what Intel's trying to do with oneAPI, you know, what Nvidia has done with CUDA, what other platform companies are trying to create: tools that allow them to leverage the hardware, but also embrace the variety of hardware that is there. And so as those software development environments and software development tools evolve to take advantage of these new capabilities, that's going to open up a lot of interesting opportunities that can leverage all these new chip architectures, that can leverage all these new interconnects, that can leverage all these new system architectures, and figuring out ways to make that all happen, I think, is going to be critically important. And then finally, I'll mention the research I'm actually currently working on is on private 5G and how companies are thinking about deploying private 5G, and the potential for edge applications for that. So I'm doing a survey of several hundred US companies as we speak and really looking forward to getting that done in the next couple of weeks. >> Yeah, look forward to that. Guys, again, thank you so much. Outstanding conversation. Anybody going to be at Dell Tech World in a couple of weeks? Bob's going to be there. Dave Nicholson. Well, drinks on me, and guys, I really can't thank you enough for the insights and your participation today. Really appreciate it. Okay, and thank you for watching this special power panel episode of theCUBE Insights powered by ETR. Remember we publish each week on siliconangle.com and wikibon.com. All these episodes are available as podcasts. DM me or any of these guys. I'm at DVellante. You can email me at David.Vellante@siliconangle.com. Check out etr.ai for all the data. This is Dave Vellante. We'll see you next time. (upbeat music)

Published Date : Apr 25 2022

Breaking Analysis: The Improbable Rise of Kubernetes


 

>> From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> The rise of Kubernetes came about through a combination of forces that were, in hindsight, quite a long shot. Amazon's dominance created momentum for cloud native application development, and the need for newer and simpler experiences, beyond just easily spinning up compute as a service. This wave crashed into innovations from a startup named Docker, and a reluctant competitor in Google, that needed a way to change the game on Amazon and the cloud. Now, add in the effort of Red Hat, which needed a new path beyond Enterprise Linux, and oh, by the way, it was just about to commit to a path of a Kubernetes alternative for OpenShift, and figure out a governance structure to herd all the cats in the ecosystem, and you get the remarkable ascendancy of Kubernetes. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this breaking analysis, we tapped the back stories of a new documentary that explains the improbable events that led to the creation of Kubernetes. We'll share some new survey data from ETR and commentary from the many early innovators who came on theCUBE during the exciting period since the founding of Docker in 2013, which marked a new era in computing. Because we're talking about Kubernetes and developers today, the hoodie is on. And there's a new two-part documentary that I just referenced; it's out and it was produced by Honeypot on Kubernetes, part one and part two. It tells the story of how Kubernetes came to prominence and many of the players that made it happen. Now, a lot of these players, including Tim Hockin, Kelsey Hightower, Craig McLuckie, Joe Beda, Brian Grant, Solomon Hykes, Jerry Chen and others, came on theCUBE during the formative years of containers going mainstream and the rise of Kubernetes. John Furrier and Stu Miniman were at the many shows we covered back then, and they unpacked what was happening at the time. We'll share the commentary from the guests that they interviewed and try to add some context. Now let's start with the concept of developer defined infrastructure, DDI. Jerry Chen was at VMware and he could see the trends that were evolving. He left VMware to become a venture capitalist at Greylock. Docker was his first investment. And he saw the future this way. >> What happens is, when you define infrastructure in software, you can program it. You make it portable. And that's the beauty of this cloud wave, what I call DDIs. Now, to your point, every piece of infrastructure, from storage, networking, to compute, has an API, right? And in AWS there was an early trend where S3, EBS, EC2 had APIs. >> As building blocks too. >> As building blocks, exactly. >> Not monolithic. >> Monolithic building blocks, every little building block has its own API, and just like that, Docker really is the API for this unit of the cloud. It enables developers to define how they want to build their applications, how to network them, you know, as Wills talked about, and how you want to secure them and how you want to store them. And so the beauty of this generation is now developers are determining how apps are built, not just at the, you know, end user, you know, iPhone app layer, but the data layer, the storage layer, the networking layer. So every single level is being disrupted by this concept of a DDI, and where and how you build, use, and actually purchase IT has changed.
And you're seeing the incumbent vendors like Oracle, VMware, Microsoft try to react, but you're seeing a whole new generation of startups. >> Now, what Jerry was explaining is this new abstraction layer that was being built. Here's some ETR data that quantifies that and shows where we are today. The chart shows net score, or spending momentum, on the vertical axis, and market share, which represents the pervasiveness in the survey set. So as Jerry and the innovators who created Docker saw, the cloud was becoming prominent, and you can see it still has spending velocity that's elevated above that 40% red line, which is kind of a magic mark of momentum. And of course, it's very prominent on the X axis as well. And you see the low-level infrastructure, virtualization, and that even floats above servers and storage and networking, right? Back in 2013, the conversation with VMware, and by the way, I remember having this conversation deeply at the time with Chad Sakac, was: we're going to make this low-level infrastructure invisible, and we intend to make virtualization invisible, i.e., simplified. And so, you see above the two arrows there, related to containers, container orchestration and container platforms, which are abstraction layers and services above the underlying VMs and hardware. And you can see the momentum that they have right there with the cloud and AI and RPA. So you had these forces that Jerry described that were taking shape, and this picture kind of summarizes how they came together to form Kubernetes. In the upper left, of course, you see AWS, and we inserted a picture from a post we did right after the first re:Invent in 2012. It was obvious to us at the time that the cloud gorilla was AWS, and it had all this momentum. Now, Solomon Hykes, the founder of Docker, you see there in the upper right. He saw the need to simplify the packaging of applications for cloud developers. Here's how he described it back in 2014 in theCUBE with John Furrier. >> A container is a unit of deployment, right? It's the format in which you package your application, all the files, all the executables, libraries, all the dependencies, in one thing that you can move to any server and deploy in a repeatable way. So it's similar to how you would run an iOS app on an iPhone, for example. >> Docker at the time was a 30-person company, and it had just changed its name from dotCloud. And back to the diagram, you have Google with a red question mark. So why would you need more than what Docker had created? Craig McLuckie, who was a product manager at Google back then, explains the need for yet another abstraction. >> We created the strong separation between infrastructure operations and application operations. And so, Docker has created a portable framework to take, basically, a binary and run it anywhere, which is an amazing capability, but that's not enough. You also need to be able to manage that with a framework that can run anywhere. And so, the union of Docker and Kubernetes provides this framework where you're completely abstracted from the underlying infrastructure. You could use VMware, you could use a Red Hat OpenStack deployment. You could run on another major cloud provider like Rackspace. >> Now Google had this huge cloud infrastructure, but no commercial cloud business to compete with AWS. At least not one that was taken seriously at the time. So it needed a way to change the game.
And it had this thing called Google Borg, which is a container management system and scheduler, and Google looked at what was happening with virtualization and said, you know, we obviously could do better. Joe Beda, who was with Google at the time, explains their mindset going back to the beginning. >> Craig and I started up Google Compute Engine, VM as a service. And the odd thing to recognize is that nobody who had been in Google for a long time thought that there was anything to this VM stuff, right? 'Cause Google had been on containers for so long. That was their mindset. Borg was the way that stuff was actually deployed. So, you know, my boss at the time, who's now at Cloudera, booted up a VM for the first time, and anybody in the outside world would be like, hey, that's really cool. And his response was like, well, now what? Right? You're sitting at a prompt. Like, that's not super interesting. How do I run my app? Right? Which is what everybody's been struggling with with cloud. It's not how do I get a VM up, it's how do I actually run my code? >> Okay. So Google never really did virtualization. They were looking at the market and said, okay, what can we do to make Google relevant in cloud? Here's Eric Brewer from Google talking on theCUBE about Google's thought process at the time. >> One interesting thing about Google is it essentially makes no use of virtual machines internally. And that's because Google started in 1998, which is the same year that VMware started and kind of brought the modern virtual machine to bear. And so Google infrastructure tends to be built really on kind of classic Unix processes and communication. And so scaling that up, you get a system that works a lot with just processes and containers. So kind of when I saw containers come along with Docker, we said, well, that's a good model for us. And we can take what we know internally, which was called Borg, a big scheduler, and we can turn that into Kubernetes and we'll open source it. And suddenly we have kind of a cloud version of Google that works the way we would like it to work. >> Now, Eric Brewer gave us the bumper sticker version of the story there. What he reveals in the documentary that I referenced earlier is that initially Google was like, why would we open source our secret sauce to help competitors? So folks like Tim Hockin and Brian Grant, who were on the original Kubernetes team, went to management and pressed hard to convince them to bless open sourcing Kubernetes. Here's Hockin's explanation. >> When Docker landed, we saw the community building and building and building. I mean, that was a snowball of its own, right? And as it caught on, we realized we know where this is going to go. We know once you embrace the Docker mindset that you very quickly need something to manage all of your Docker nodes once you get beyond two or three of them, and we know how to build that, right? We got a ton of experience here. Like, we went to our leadership and said, you know, please, this is going to happen with us or without us, and I think the world would be better if we helped. >> So the open source strategy became more compelling as they studied the problem, because it gave Google a way to neutralize AWS's advantage. Because with containers, you could develop on AWS, for example, and then run the application anywhere, like Google's cloud. So it not only gave developers a path off of AWS, but if Google could develop a strong service on GCP, they could monetize that play.
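Beda's "how do I actually run my code?" question is exactly what the Kubernetes Deployment object answers. A hedged sketch with the official Kubernetes Python client follows; the image name, replica count, and namespace are illustrative assumptions.

```python
from kubernetes import client, config

config.load_kube_config()   # uses your local kubeconfig
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired state: keep three copies of the app running
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="myapp:1.0",   # the container artifact from earlier
                        ports=[client.V1ContainerPort(container_port=8000)],
                    )
                ]
            ),
        ),
    ),
)

# The scheduler, not the developer, decides which machines run the copies.
apps.create_namespaced_deployment(namespace="default", body=deployment)
```

You declare the desired state and the cluster keeps it true, which is the Borg lesson carried into the open.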
Now, focus your attention back to the diagram, which shows a smiling Alex Polvi from CoreOS, which was acquired by Red Hat in 2018. He saw the need to bring Linux into the cloud. I mean, after all, Linux was powering the internet, it was the OS for enterprise apps, and he saw the need to extend its path into the cloud. Now here's how he described it at an OpenStack event in 2015. >> Similar to what happened with Linux. Like, yes, there is still a need for Linux and Windows and other OSs out there, but by and large, on production web infrastructure, it's all Linux now, and you were able to get onto one stack. And how were you able to do that? It was by having a truly open, consistent API and a commitment to not breaking APIs, and so on. That allowed Linux to really become ubiquitous in the data center. Yes, there are other OSs, but Linux, by and large, is what is being used for production infrastructure. And I think you'll see a similar phenomenon happen for this next level up, 'cause we're treating the whole data center as a computer instead of treating one individual instance as the computer. And that's the stuff that Kubernetes and so on is doing. And I think there will be one that shakes out over time, and we believe that'll be Kubernetes. >> So Alex saw the need for a dominant container orchestration platform, and you heard him, they made the right bet. It would be Kubernetes. Now, Red Hat has been around since 1993, so it has a lot of on-prem, so it needed a future path to the cloud. So they rang up Google and said, hey, what do you guys have going on in this space? Google was kind of non-committal, but it did expose that it was thinking about doing something that was, you know, pre Kubernetes, before it was called Kubernetes. But hey, we have this thing and we're thinking about open sourcing it. But with Google's internal debates and, you know, some of the arm twisting from the engineers, it was taking too long. So Red Hat said, well, screw it, we've got to move forward with OpenShift, so we'll do what Apple and Airbnb and Heroku are doing and we'll build on an alternative. And so they were ready to go with Mesos, which was very much more sophisticated than Kubernetes at the time, and much more mature. But then Google at the last minute said, hey, let's do this. So Clayton Coleman with Red Hat, he was an architect, and he leaned in right away. He was one of the first committers from outside of Google. But you still had these competing forces in the market, and internally there were debates: do we go with simplicity, or do we go with system scale? And Chen Goldberg from Google explains why they focused first on simplicity and getting that right. >> We had to defend why we were only supporting 100 nodes in the first release of Kubernetes. And they explained that they know how to build for scale, they've done that, they know how to do it, but realistically most users don't need large clusters. So why create this complexity? >> So Goldberg explains that rather than competing right away with, say, Mesos or Docker Swarm, which were far more baked, they made the bet to keep it simple and go for adoption and ubiquity, which obviously turned out to be the right choice. But the last piece of the puzzle was governance. Now, Google promised to open source Kubernetes, but when it started to open up to contributors outside of Google, the code was still controlled by Google, and developers had to sign Google paperwork that said Google could still do whatever it wanted.
It could sub-license, et cetera. So Google had to pass the baton to an independent entity, and that's how the CNCF was started. Kubernetes was its first project. Let's listen to Chris Aniszczyk of the CNCF explain. >> CNCF is all about providing a neutral home for cloud native technology. And, you know, it's been almost two years since our first board meeting. And the idea was, you know, there's a certain set of technologies out there, you know, that are essentially microservice based, that live in containers, that are essentially orchestrated by some process, right? That's essentially what we mean when we say cloud native, right? And CNCF was seeded with Kubernetes as its first project. And, you know, as we've seen over the last couple of years, Kubernetes has grown, you know, quite well. They have a large community, a diverse, you know, contributor base, and have done, you know, kind of extremely well. They're actually one of the fastest, you know, highest velocity open source projects out there, maybe. >> Okay. So this is how we got to where we are today. This ETR data shows container orchestration offerings. It's the same XY graph that we showed earlier, and you can see where Kubernetes lands, notwithstanding that Kubernetes is not a company. But respondents, you know, they're doing Kubernetes. They maybe don't know, you know, whose platform, and it's hard with the ETR taxonomy, it's a bit fuzzy with survey data, because Kubernetes is increasingly becoming embedded into cloud platforms, and IT pros may not even know which one specifically. And so the reason we've linked these two platforms, Kubernetes and Red Hat OpenShift, is because OpenShift right now is a dominant revenue player in the space and is an increasingly popular PaaS layer. Yeah, you could download Kubernetes and do what you want with it, but if you're really building enterprise apps, you're going to need support, and that's where OpenShift comes in. And there's not much data on this, but we did find this chart from AMDA, which shows the container software market, whatever that really is, and Red Hat has got 50% of it. This is revenue. And, you know, we know the muscle of IBM is behind OpenShift, so that's really not hard to believe. Now, we've got some other data points that show how Kubernetes is becoming less visible and more embedded under the hood, if you will. As this chart shows, this is data from CNCF's annual survey, they had 1,800 respondents here, and the data showed that 79% of respondents use certified Kubernetes hosted platforms. Amazon Elastic Container Service for Kubernetes was the most prominent at 39%, followed by Azure Kubernetes Service at 23%, and Azure AKS Engine at 17%, with Google's GKE, Google Kubernetes Engine, behind those three. Now, you have to ask, okay, Google's management initially had concerns, you know, why are we open sourcing such a key technology? And the premise was it would level the playing field, and for sure it has. But you have to ask, has it driven the monetization Google was after? And I would have to say no, it probably didn't. But think about where Google would have been if it hadn't open sourced Kubernetes. How relevant would it be in the cloud discussion? Despite its distant third position behind AWS and Microsoft, or even fourth if you include Alibaba, without Kubernetes, Google probably would be much less prominent or possibly even irrelevant in cloud, enterprise cloud. Okay.
Let's wrap up with some comments on the state of Kubernetes and maybe a thought or two about, you know, where we're headed. So look, no shocker, Kubernetes, for all its improbable beginnings, has gone mainstream in the past year or so. We're seeing much more maturity and support for stateful workloads, and big ecosystem support with respect to better security and continued simplification. But, you know, it's still pretty complex. It's getting better, but it's not VMware level of maturity, for example, of course. Now, adoption has always been strong for Kubernetes for cloud native companies who start with containers on day one, but we're seeing many more IT organizations adopting Kubernetes as it matures. It's interesting, you know, Docker set out to be the system of the cloud, and Kubernetes has really kind of become that. Docker Desktop is where Docker's action really is, that's where Docker is thriving. It sold off Docker Swarm to Mirantis. Docker has made some tweaks to its licensing model to be able to continue to evolve its business. We'll hear more about that at DockerCon. And as we said years ago, we expected Kubernetes to become less visible (Stu Miniman and I talked about this in one of our predictions posts) and really become more embedded into other platforms. And that's exactly what's happening here, but it's still complicated. Remember, go back to the early and mid cycle of VMware: understanding things like application performance, you needed folks in lab coats to really remediate problems and dig in and peel the onion and scale the system. And in some ways you're seeing that dynamic repeated with Kubernetes. Security, performance, scale, recovery when something goes wrong, all are made more difficult by the rapid pace at which the ecosystem is evolving Kubernetes. But it's definitely headed in the right direction. So what's next for Kubernetes? We would expect further simplification, and you're going to see more abstractions. We live in this world of almost perpetual abstractions. Now, as Kubernetes improves support for multi-cluster, it will begin to treat those clusters as a unified group, so kind of abstracting multiple clusters and treating them as one to be managed together. And this is going to create a lot of ecosystem focus on scaling globally. Okay, once you do that, you're going to have to worry about latency, and then you're going to have to keep pace with security as you expand the threat area. And then of course recovery: what happens when something goes wrong? The more complexity, the harder it is to recover, and that's going to require new services to share resources across clusters. So look for that. You also should expect more automation, driven by the host cloud providers. As Kubernetes supports more stateful applications and begins to extend its cluster management, cloud providers will inject as much automation as possible into the system. And finally, as these capabilities mature, we would expect to see better support for data-intensive workloads like AI and machine learning and inference. Scheduling these workloads becomes harder because they're so resource intensive, and performance management becomes more complex. So that's going to have to evolve.
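The multi-cluster direction described above is already approachable today in a crude form: iterate over every context in a kubeconfig and treat the fleet as one inventory. A rough sketch with the Kubernetes Python client is below; it only aggregates Deployment counts across contexts, and real multi-cluster projects (federation, fleet managers) go much further. The "app" label convention is an assumption.

```python
from collections import Counter
from kubernetes import client, config

contexts, _active = config.list_kube_config_contexts()
fleet = Counter()

for ctx in contexts:
    name = ctx["name"]
    api = client.AppsV1Api(api_client=config.new_client_from_config(context=name))
    deployments = api.list_deployment_for_all_namespaces().items
    print(f"{name}: {len(deployments)} deployments")
    for dep in deployments:
        labels = dep.metadata.labels or {}
        app = labels.get("app", dep.metadata.name)
        fleet[app] += dep.spec.replicas or 0   # tally desired replicas per app, fleet-wide

print("Fleet-wide replica counts:", dict(fleet))
```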
I mean, frankly, many of the things that the Kubernetes team, way back when, you know, back-burnered early on, for example things you saw in Docker Swarm or Mesos, they're going to start to enter the scene now with Kubernetes, as they start to sort of prioritize some of those more complex functions. Now, the last thing I'll ask you to think about is what's next beyond Kubernetes. You know, this isn't it, right? With serverless and IoT and the edge and new data-heavy workloads, there's something that's going to disrupt Kubernetes. And by the way, in that CNCF survey, nearly 40% of respondents were using serverless, and that's going to keep growing. So how is that going to change the development model? You know, Andy Jassy once famously said that if they had to start over with Amazon retail, they'd start with serverless. So let's keep an eye on the horizon to see what's coming next. All right, that's it for now. I want to thank my colleagues, Stephanie Chan, who helped research this week's topics, and Alex Myerson on the production team, who also manages the Breaking Analysis podcast. Kristin Martin and Cheryl Knight help get the word out on socials, so thanks to all of you. Remember, these episodes are all available as podcasts wherever you listen, just search Breaking Analysis podcast. Don't forget to check out the ETR website at etr.ai. We also publish a full report every week on wikibon.com and siliconangle.com. You can get in touch with me, email me directly at david.vellante@siliconangle.com, or DM me @dvellante, or comment on our LinkedIn posts. This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week, everybody. Thanks for watching. Stay safe, be well. And we'll see you next time. (upbeat music)

Published Date : Feb 12 2022


Micah Coletti & Venkat Ramakrishnan | KubeCon + CloudNativeCon NA 2021


 

>> Welcome back to Los Angeles. theCUBE is live, I can't say that enough, theCUBE is live. We're at KubeCon + CloudNativeCon '21. We've been here all day yesterday and today, and tomorrow, talking with lots of guests, really uncovering what's going on in the world of Kubernetes. Lisa Martin here with Dave Nicholson. We've got some folks next, we're going to be talking about a customer use case, which is always one of my favorite things to talk about. Please welcome Michael Coletti, principal platform engineer at CHG Healthcare, and Venkat Ramakrishnan, VP of products at Portworx by Pure Storage. Guys, welcome to the program. >> Thank you. Happy to be here. >> Yeah. So Michael, first of all, let's go ahead and start with you. Give the audience an overview of CHG Healthcare. >> Yeah, so CHG Healthcare, we're a staffing company, so we do locum tenens, and our clients are doctors and hospitals. So we help staff hospitals with temporary doctors or even permanent placement. So we deal with a lot of doctors, a lot of nursing, and we're a combination of multiple companies, with CHG as the parent. And, yeah, we're known in the industry as one of the leaders in this field, in providing hospitals with high quality doctors and nurses, and, you know, our customer service is like number one, and one of the things our CEO is really focused on is how do we make that more digital, how do we provide that same level of quality of service but a digital experience that's just as rich. >> I can imagine there was a massive need for that in the last 18 months alone. >> Covid definitely really raised that awareness for us, and the importance of that digital experience, and that we need to be out there in the digital market. >> Absolutely. So you're a customer of Portworx by Pure Storage, we're going to get into that. But Venkat, talk to us about what's going on. The acquisition of Portworx by Pure Storage was about a year ago. You're VP of products, what's going on? >> Yeah, I mean, you know, first of all, I cannot say how much of a great fit it is for Portworx to be part of Pure Storage. Pure itself is a very fast moving, large startup that's a dominant leader in the flash and data center space. And, you know, Pure recognizes the fact that Kubernetes is the new operating system of the cloud, it's kind of virtualizing the cloud itself, and there is, you know, a big burgeoning need for data management in Kubernetes, and for how you can kind of orchestrate workloads between your on-prem data centers and the cloud and back. So Portworx fits right into that story and completes the vision of data management for our customers. It's been phenomenal, our business has grown as part of being part of Pure, and, you know, we're looking at launching some new products as well, and it's all exciting times. >> So you must have been pretty delighted to be acquired as a startup by essentially a startup, because although Pure has reached significant milestones in the storage business and is a leader in flash storage, still that startup mindset is there. That's unique, that's not the same as being acquired by a company that's been around for 100 years seeking to revitalize itself. Can you talk a little bit about that aspect? >> So I think Pure's culture is highly innovation driven and it's a very open, flat culture. Right?
I mean, everybody in Pure is accessible, you can easily have a conversation with folks, and everybody has this learning mindset, and Portworx is and has always been the same way. Right? So when you put these teams together, we can create wonders. I mean, right after the acquisition, just within a few months, we announced an integrated solution where Portworx orchestrates volumes and file shares on Pure flash products and then delivers it as an integrated solution for our customers. And Pure has a phenomenal cloud-based monitoring and management system called Pure1 that we integrated well into. Now we're bringing the power of all of the observability that Pure's customers are used to to all of Portworx's customers, and we're super happy, you know, delivering that capability to our customers, and our customers are delighted. Now they can have a complete view all the way from the Kubernetes app to the flash, and I don't think any one company on the planet can even claim they can do that. >> I think it's fair to acknowledge that Pure1 was observability before observability was a word, or at least one used regularly. So that's very interesting. >> Michael, talk to us about, obviously you're a customer, CHG is a customer of Portworx, now Portworx by Pure Storage. Talk to us about the use case. What was the compelling event from a storage perspective that led you to Portworx in the first place? >> So it began with our CEO basically having this vision that we need to have a digital presence, and this was even before Covid. So they brought me on board, and my manager and I basically had this task of how are we going to get out into the cloud, how are we going to make that happen. And we chose to follow very much a cloud native strategy, and the platform of choice, I mean, it just made sense with Kubernetes. And so when we were looking at Kubernetes, starting to figure out how we were doing this, we knew that data was going to be a big factor, you know, being able to provide data. We're very much focused on event driven, we're really pushing to an event-driven architecture. So we leverage Kafka on top of Kubernetes, but at the time we were actually leveraging Kafka with MSK out in AWS, and that was just a huge cost to us. So I came on board, I had experience with Portworx at a prior company before that, and I basically said we need to figure out a great storage overlay. And the only way to do that is we've got to have high-performance storage, it's got to be secure, and we've got to be able to back up and recover that storage, and Portworx was the right match. And that allowed us to have a very smooth transition off of MSK onto Kubernetes, saving us a significant amount of money per month, and just leverage that already existing hardware, our existing compute and memory, and move right to Portworx. >> Leveraging your existing investments. >> Exactly, which is key. Very, very key. >> So Venkat, how common are the challenges that, when you guys came together with CHG, how common are the challenges? >> It's actually, that's a great question. You know, I'll tell you, the challenges that Michael and his team are running into are what we see a lot in the industry, where people pay a ton of money, you know, to other vendors, or especially in some cases use some cloud native services, but they want to have control over the data.
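For readers who want to picture the "storage overlay" Coletti describes, here is a minimal, hypothetical sketch: requesting a persistent volume for a Kafka broker through a Portworx-provisioned StorageClass, using the Kubernetes Python client. The StorageClass name, namespace, and size are assumptions for illustration; in practice a Kafka operator or Helm chart would request these through StatefulSet volumeClaimTemplates rather than by hand.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="data-kafka-broker-0"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="px-kafka-sc",   # hypothetical Portworx-backed StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

# The provisioner named by the StorageClass (Portworx here, by assumption) is what
# actually carves out and replicates the volume on the existing hardware; this code
# only declares the claim, assuming a "kafka" namespace already exists.
core.create_namespaced_persistent_volume_claim(namespace="kafka", body=pvc)
```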
They want to control the cost, and they want higher performance, and there's also governance and regulatory things that they need to control better. So they want to kind of bring these services in and have more control over them. Right? Now, we work very well with all of our partners, including the cloud providers as well as, you know, several other vendors, but different customers have different kinds of needs, and Portworx gives them the flexibility. If you are a customer who wants, you know, a lot of control over your applications, the performance and the latency, and wants to control costs very well by leveraging existing investments, Portworx can deliver that for you in your data center. You can integrate it with Pure flash and you get a complete solution. Or if you want to run it in the cloud and you still want to leverage the agility of the cloud and scale, Portworx delivers a solution for you as well. So it not only protects their investment, it future-proofs their architecture completely. So if you want to tier to the cloud or burst to the cloud, you have a great solution that you can continue to leverage. >> When you hear future proof, I'm a marketer, so I always love to know what it means to different people. What does that mean to you in your environment? >> My environment. So future proof means, like, one of the things we've been addressing lately that's just a real big challenge, and I'm sure it's a challenge in the industry, especially with Kubernetes, is upgrading our clusters, the ability to actually maintain a consistent flow with how fast Kubernetes is growing. You know, we leverage EKS, so it's like 1.21 or 1.22 now. That effort to upgrade a cluster can be a daunting one. With Portworx, we actually were able to make it to where we could spin up a brand new cluster, and with Portworx shift all our application services and data, migrate it completely over, Portworx handles all that for us, and stand up that new cluster in less than a day. And that effort would take us a week, two weeks to do. So it's not even the man hours, the time spent there, but just the reliability of being able to do that, and the cost. You know, instead of standing up a new cluster and configuring it and doing all that and spending all that time, we moved to what we call a blue-green cutover strategy, and Portworx is an essential piece of that. >> So is it fair to say that there are a variety of ways that people approach Portworx from a value perspective? In terms of, I know that one area that you are particularly good in is the area of backups in this environment, but then you get data management, and there's a third kind of vector there. What is the third vector? >> Yeah, it's all of the data services. Data services, like, for example, database as a service on any Kubernetes cluster, be it in your cloud or your on-prem data centers. >> What kind of databases are you talking about? >> Anything from Redis, Kafka, Postgres, MySQL, you know, Cassandra, we're supporting. We just announced something called Portworx Data Services, an offering that essentially delivers all these databases as a service on any Kubernetes cluster that a customer can point to, and they essentially get the automated management of the database from day one to day three, the entire life cycle.
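Coletti's blue-green cutover is worth a concrete, if simplified, picture. The sketch below only copies stateless Deployment specs from an old cluster context to a new one; the data side of the move is what Portworx's own migration tooling handles, and that part is not shown. The context names and namespace are made up for illustration.

```python
from kubernetes import client, config

OLD_CTX, NEW_CTX = "chg-eks-1-21", "chg-eks-1-22"   # hypothetical kubeconfig contexts
NAMESPACE = "apps"                                   # assumed to exist on both clusters

old_api = client.AppsV1Api(api_client=config.new_client_from_config(context=OLD_CTX))
new_api = client.AppsV1Api(api_client=config.new_client_from_config(context=NEW_CTX))

for dep in old_api.list_namespaced_deployment(namespace=NAMESPACE).items:
    # Strip server-assigned fields so the object can be re-created cleanly
    dep.metadata.resource_version = None
    dep.metadata.uid = None
    dep.metadata.creation_timestamp = None
    dep.metadata.managed_fields = None
    dep.status = None
    new_api.create_namespaced_deployment(namespace=NAMESPACE, body=dep)
    print(f"recreated {dep.metadata.name} on {NEW_CTX}")

# Once the green cluster checks out, traffic is cut over at DNS or the load
# balancer, and the blue cluster can be torn down.
```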
You know, through a regular Kubernetes kubectl experience, through APIs and SDKs, and a nice slick UI, with, you know, role-based access control and all of that, so they can completely control their data and their applications through it. And, you know, that's the third vector, Portworx Data Services. >> Michael, a question for you. So Portworx has been a part of Pure Storage. You've known it, obviously, for several years before you were at CHG, you brought it up to CHG, and you now know it a year into being acquired by a fast-paced startup. Talk to me about the relationship and some of the benefits that you're getting with Portworx as a part of Pure Storage. >> Well, I mean, one of the things, you know, when I heard about the acquisition, my first thing was, I was a little bit concerned: is that relationship going to change? When we were looking at adopting Portworx, one thing I would tell my management is Portworx is not just a vendor that wants to throw a solution at you and provide some capability. They're a partner. They want to partner with you in your success, in your journey, in this whole cloud native journey, to provide this rich digital experience for not only our platform engineering team but our dev teams, but also to be able to really accelerate the development of our services so we can provide that digital portal for our end users. And that didn't change. If anything, that accelerated. That relationship did not change. You know, I came to Venkat with an issue we were just dealing with, he immediately got someone on the phone call with me, and so that has not changed. So it's really exciting to see that, now that they've been acquired, they still are very much invested in the success of their customers and making sure we're successful. You know, I was worried all of a sudden I was going to have to go through a whole different support process and it was going to go into a black hole. Didn't happen. They still are very much involved with their customers. >> That sounds kind of similar to what you talked about with the cultural alignment. I've known Pure for a long time, and they're very customer centric. Sounds like one of the areas in which there was a very strong alignment with Portworx. >> Absolutely. Portworx has always taken pride in being a customer-first company. Our founders are heavily customer focused. You know, they have always aligned the Portworx business to our customers' needs. Pure is a company that's maniacally focused on customers, right? I mean, that's all, you know, Pure's founder, Coz, and everybody care about. And so, you know, bringing these companies together and being part of the Pure team, I kind of see how synergistic it is. And, you know, that has enabled us to serve our customers even better than before. >> So I'm curious about the two of you personally, in terms of your histories. I'm going to assume that you didn't both just bounce out of high school into the world of Kubernetes, right? So, like Lisa and I, you're spanning the generations between the world of, say, virtualization based on x86 architecture, where you have a full-blown operating system that you're working with, and the world of virtualization where you can have microservices. Michael, with you first, talk about what that's been like navigating that change. You were in the midst of that. Do you have advice for others that are navigating that change?
Don't be afraid of it, you know. A lot of people, you know, I call it, we're moving from where we're naming, we still have cats and dogs, they have a name, the VMs, whether they're physical boxes or they're VMs, to where it's more like cattle, you know. It's like we don't own the OS, and not to be afraid of that, because change is really good. You know, the ability for me to not have to worry about patching an operating system is huge, you know, where I can rely on someone like EKS to own that OS and the version, and allow them to, if a CVE comes out, they let me know, I go and I use their tools to be able to upgrade. So I don't have to literally worry about owning that OS, and containers are the same thing. You know, it's all about being fault tolerant, right? And being able to change, where you can actually roll out a new version of a container, a base image, with a lot of these fixes, without having to go and patch a bunch of servers. I mean, patch night was hell, I'm sorry if I can say that, but it was a nightmare, you know. But this whole world has just been a game changer with that. >> So Venkat, from your perspective, you were coming at it going into a startup, looking at the landscape and the future and seeing opportunity. What's that been like for you? I guess the question for you is more, something Lisa and I talk about, this concept of peak Kubernetes. Where are we in the wave? Is this just the beginning? Are we in the thick of it? >> Yeah, I think I would say we're kind of transitioning from the early adopters to the early majority phase in the whole, you know, crossing-the-chasm analogy. Right? So I would say we're still in the early stages of this big wave that's going to transform how infrastructure is built, how apps are built and managed and run in production. I think some of the key pieces are falling into place and maturing. There are some other pieces, like observability and security, you know, kind of edge use cases, that are going to get a lot more mature, and you'll see that the cloud as we know it today and the apps as we know them today are going to be radically different. And, you know, if you're not building your apps and your business on this modern platform, on this modern infrastructure, you're going to be left behind. You know, my wife's birthday was a couple of days ago. I was telling this story to a couple of friends. I used another flower delivery website, and they missed delivering the flowers on the same day, right? So when they told me all kinds of excuses, I just went and looked up, you know, DoorDash, which delivers, you know, your food, but there's also flower delivery in DoorDash, and I DoorDashed flowers to her, and I could track the flowers all the way. She did not eat them, okay, but my kids love the chocolates, though. So, you know, the case in point is that you cannot be, you know, building a modern business without leveraging the modern toolchain, and how the business is going to be delivered, that thing is going to be changing dramatically. And that kind of customer experience, if you don't deliver it, you're not going to be successful in business, and Kubernetes is the fundamental technology that enables this, these containers.
It's a fundamental piece of technology that enables building new businesses, you know, modernizing existing businesses, and with 5G, there's going to be new innovations that are going to get unleashed, and, again, Kubernetes and containers enable us to leverage those. And so we're still scratching the surface on this. It's big now, and it's going to be much, much bigger as we go into the next couple of years. >> Speaking of scratching the surface, Michael, take us out in the last 30 seconds or so with where CHG Healthcare is on its digital transformation. How is Portworx facilitating that? >> So we're right in the thick of it. I mean, we still have what we call the legacy, we're working on getting those moved over. But we're really moving forward to provide that rich experience, especially with event-driven platforms like Kafka and Kubernetes, and partnering with Portworx is one of the key things for us with that, and AWS along with that. And I remember I heard a talk, I can't remember who gave it, but he talked about how Kubernetes is just sort of like a 56K modem. You're hearing it, you see it, but it's got to get to the point where it's just there, it's just the high-speed internet. >> Kelsey Hightower. >> That's who. Great. Yeah, and I really like that, because that's true, you know, and that's where we're on that transition. We're still early, it's still that 56K. So you still want to hear the noise, you still want to do kubectl, you want to learn it the hard way and do all that fun stuff. But eventually it's going to be where it's just there and it's running everything, like 5G, I mean stripped down, doing MicroK8s, things like that, you know. We're going to see it in a lot of other areas, and it'll just proliferate and really accelerate the industry, and compute and memory and storage. >> Yeah, a lot of acceleration. Guys, thank you. This has been a really interesting session. I always love digging into customer use cases, how CHG is really driving its evolution with Portworx. Venkat, thanks for sharing with us what's going on with Portworx a year after the acquisition. It sounds like all good stuff. >> Thank you. Thanks for having us. It's been fun. >> Our pleasure. All right, for Dave Nicholson, I'm Lisa Martin. You're watching theCUBE live from Los Angeles. This is our coverage of KubeCon + CloudNativeCon '21.

Published Date : Oct 15 2021


Dion Hinchcliffe, Constellation Research | CUBE Conversation, October 2021


 

(upbeat music) >> Welcome to this Cube conversation sponsored by Citrix. This is the third and final installment in the Citrix launchpad series. We're going to be talking about the launchpad series for work. Lisa Martin here with Dion Hinchcliffe, VP and principal analyst at Constellation research. Dion, welcome to the program. >> No, thanks Lisa. Great to be here. >> So we have seen a tremendous amount of change in the last 18, 19 months. You know, we saw this massive scatter to work from home a year and a half ago. Now we're in this sort of distributed environment. That's been persisting for a long time. Talk to me about, we're going to be talking about some of the things that Citrix is seeing and some of the things that they're doing to help individuals and teams, but give me your lens from Constellation's perspective. What are some of the major challenges with this distributed environment that you've seen? >> Sure. Well, so we've gone from this, you know, the world of work, the way that it was now, we're all very decentralized, you know, work from anywhere. Remote work is really dominating, you know, white collar types of activities in the workplace and workplaces that in our homes for most of us even today. But that started to change. Some people are going back. Although I just recently spoke to a panel of CIOs that says they have no plans anytime soon, but they're very aware that they need to have workable plans for when we start sending people back to the office and there's this big divide. How are we going to make sure that we have one common culture? We have a collaborative organization when, you know, a good percentage of our workers are in the office, but also maybe as much as half the organization is at home. And so, how to make processes seamless, how to make people collaborate and make sure there's equity and inclusion so that the people at home aren't left out and then people in the office, maybe you don't have an unfair advantage. So those are all the conversations. And of course, because this is a technology revolution, remote work was enabled by technology. We're literally looking at it again for this hybrid work, this, you know, this divided organization that we're going to have. >> You mentioned culture that's incredibly important, but also challenging to do with this distribution. I was looking at some research that Citrix provided, asking individuals from a productivity perspective, and two thirds said, hey, for our organizations that have given us more tools for collaboration and communication, yes, we are absolutely more productive. But the kicker is, the same amount of people, about two thirds that answered the survey said, we've now got about ten tools. So complexity is more challenging. It's harder to work individually. It's harder to work in teams. And so Citrix is really coming to the table here with the launchpad series for work, saying let's help these individuals and these teams, because as we, we think, and I'm sure you have insight Dion on this as well, this hybrid model that we're starting to see emerge is going to be persistent for a while. >> Yeah. For the foreseeable future. Cause we don't know what the future holds. So we'll have to hold the hybrid model as the primary model. And we may eventually go back to the way that we were. But for the next several years, there's going to be that. And so we're trying to wrap our arms around that. 
And I think that we're seeing with things like the Citrix announcements, a wave of responses saying, all right, let's really design properly for these changes. You know, we kind of just adapted quickly when everyone went to remote last year and now we're actually adding features to streamline, to reduce the friction, to simplify remote work, which does use, you have to use more applications. You have to switch between different things. You have to, you know, your employee experience in the digital world is just more cluttered and complicated, but it doesn't have to be. And so I, you know, we can look to some of these announcements for last year, I think address some of that. >> Let's break some of that down because to your point, it doesn't have to be complex complicated. It shouldn't be. Initially this scatter was, let's do everything we can to ensure that our teams and our people can be productive, can communicate, can collaborate. And now, since this is going to be persistent for quite some time, to your point, let's design for this distributed environment, this hybrid workforce of the future. Talk to me about the, one of the things that Citrix is doing with Citrix workspace, the app personalization, I can imagine as an individual contributor, but also as a team leader, the ability to customize this to the way that I work best is critical. >> And it really is, especially when you know, you have workers, you know, 18 or 19 months worth of new hires that you've never met. They don't really feel like, you know, this is maybe their organization. But if you allow them to shape it a little bit, make it contextual for them. So they don't just come into this cookie cutter digital experience that actually is kind of more meaningful for them. It makes it easier for them to get their job done and things are the way that they want them and where they want them. I think that makes a lot of sense. And so the app personalization announcements is important for remote workers in particular, but all workers to say, hey, can I start tailoring, you know, parts of my employee experience? So they make more sense for me. And I feel like I belong a little bit more. I think it's significant. >> It is. Let's talk about it from a security perspective though. We've seen massive changes in the security landscape in the last year and a half. We've seen some Citrix data that I was looking at, said between 2019 and 2020, ransomware up 435%, malware up 358%. And of course the weakest link being humans. Talk to me from a Citrix workspace perspective about some of the things that they've done to ensure that those security policies can be applied. >> Well, and the part that I really liked about the launchpad announcements around work in terms of security was this much more intelligent analysis. You know, one of the most frustrating things is you're trying to get work done remotely and maybe you're you're in crunch mode and all of a sudden the security system clamps down because they think you're doing something that, you know, you might be sharing information you shouldn't be and now you can't, get your deadline met. I really liked how the analytics inside the new security features really try to make sure they're applying intelligent analysis of behavior. And only when it's clear that a bad actor is in there doing something, then they can restrict access, protect information. 
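The "intelligent analysis of behavior" Hinchcliffe describes boils down to comparing today's activity against a worker's own baseline before clamping down. The toy sketch below is purely illustrative of that idea; it is not Citrix's implementation or API, and the numbers and action name are made up.

```python
from statistics import mean, pstdev

def is_unusual(history, today, min_days=14, sigma=3.0):
    """Flag activity only when it is far outside this user's own baseline."""
    if len(history) < min_days:        # not enough history yet: stay permissive
        return False
    mu, sd = mean(history), pstdev(history)
    return today > mu + sigma * max(sd, 1.0)

# A worker who normally shares around 20 files a day suddenly shares 400:
baseline = [18, 22, 19, 25, 21, 17, 23, 20, 19, 24, 22, 18, 21, 20]
if is_unusual(baseline, today=400):
    action = "set_shared_links_read_only"   # illustrative downstream response
    print("unusual sharing detected ->", action)
else:
    print("within normal range, no intervention")
```

The point of the design is that the crunch-mode worker doing a normal volume of work never trips the rule, while a genuinely anomalous spike does.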
And so I have no doubt they'll continue to evolve the product so that it's even even more effective in terms of how it can include or exclude bad actors from doing things inside your system. And so this is the kind of intelligence security increasingly based on AI type technologies that I think that will keep our workers productive, but clamp down on the much higher rate of that activity we see out there. Because we do have so many more endpoints there's a thousand or more times more endpoints in today's organizations because of remote work. >> Right. And one of the things that we've seen with ransomware, I mentioned those numbers that Citrix was sharing. It's gotten so much more personalized, so it's harder and harder to catch these things. One of the things that I found interesting, Dion, that from a secure collaboration perspective, that Citrix is saying is that, you know, we need to go, security needs to go beyond the devices and the endpoints and the apps that an employee is using, which of which we said, there are at least 10 apps that are being used today and it needs to actually be applied at a content level, the content creation level. Talk to me about your thoughts about that. >> I think that's exactly right. So if you know the profile of that worker and the types of things they normally do, and you see unusual behavior that is uncharacteristic to that worker, because you know their patterns, the types of content, the locations of that content that they might normally have access to. And if they're just accessing things, you know, periodically, that's usually not a problem. When they suddenly access a large volume of information and appear to be downloading it, those are the types of issues and especially of content they don't normally use for their work. Then you can intervene and take more intelligent actions as opposed to just trying to limit all content for example. So that knowledge workers can actually get access to all that great information in your IT systems. You can now give them access to it, but when clearly something, something bad is happening, the system automatically does it and steps in. >> I was looking at some of the data with respect to updates to Citrix analytics that it can now auto change permissions on shared files to read only, I think you alluded to this earlier, when it detects that excess sharing is going on. >> And, inappropriate access sharing. So sometimes it's okay for a worker to access, you know, documents. But the big fear is that a bad actor gets access. They get a USB key and they download a bunch of files and they get a whole bunch of IP or important knowledge. Well, when you have a system that's continually monitoring and you know, the unblinking gaze of Citrix security capabilities are looking at the patterns, not just the content alone or just the device alone, but at the, at the usage patterns and saying, I can make this read only because that's clearly the, you know, we don't want them to be able to download this because this activity is completely out of bounds or very unusual. >> Right. One of the things also that Citrix is doing is integrating with Microsoft teams. I was listening to a fun quiz show the other day that said, what were the top two apps downloaded in 2020? And I guessed one of them correctly, Tiktok though. I still don't know how to use it. And the second one was Zoom, and I'm sure Microsoft teams is way up there. 
I was looking at some stats that said, I think as of the spring of 2020, there were 145 million daily users of Microsoft Teams. So from a collaboration perspective, that's something a lot of folks have been dependent on during the pandemic. And now within Teams, I can access Citrix Workspace? >> Yes. Well, and it's more significant than it sounds, because there's a real hunger to find a center of gravity for the employee experience. Where do I put that? Where should they be spending most of their time? Where should I be training them to focus most of their attention? And obviously workers collaborate a lot, and Teams, as part of Office 365, is a juggernaut. You know, the rise of it during the pandemic has been incredible. And just to show this, I have a digital workplace advisory board. It's companies who are the farthest along in designing digital employee experiences, and 31% of them said, this January, they're planning on centralizing the employee experience in Teams. Now, if you're a Citrix customer, you have Workspace, and you go, I don't want to be left out. This announcement allows you to say you can have the goodness of Teams and its capabilities and the power of Citrix Workspace, and you have them in one place, really creating a true center of gravity and simplifying and streamlining the employee experience. You don't have these fragmented pieces. Everything's right there in one place, in one pane of glass. And so I like this announcement. It brings Citrix up to parity with a lot of their competitors and actually eclipses several of them as well. So I really like to see this. >> So then from within Teams, I can access Citrix Workspace, I can share documents with team members and collaborate as well, is that kind of the idea? >> Yes. That is the idea, and of course they'll continue to evolve that, but now you can do your work in Citrix Workspace, and when documents are involved and you want to bring your team in, they're already right there inside that experience. >> That ability to streamline things, so critical, given the fact that we're still in this distributed environment. I'm sure families are still dealing with some amount of remote learning, or there's still distractions from the do-I-live-at-work, do-I-work-from-home environment. One of the groups I really felt for when this happened, Dion, was the contact center. I thought, these poor people, more people now with shorter and shorter fuses trying to get updates on whatever it was they had ordered, and of course all the shipping delays. And the contact center of course went (blowing sound) scattered as well, and we've got people working from home trying to do their jobs. Talk to me about some of those things that Citrix is doing, to enable with Google those contact center workers to have a good experience, so that ultimately the employee experience is good and so is the customer experience. >> The contact center worker has the toughest of all of the different employee profiles I've seen. They have the most they have to learn, the most number of applications. They're typically not highly skilled workers, so they might only just have a, you know, high school education. Yet they're being asked to cram in all of these technologies, each one with a different employee experience, and they don't stay very long as a result of that. You might train them for two months before they're effective, and they only stay for six months on average.
And so both businesses really want to be able to streamline onboarding and provisioning, and getting them set up and effective. And they want it too, if you want happy contact center workers making your customers happy and staying around. And so this announcement really allows you to deploy pre-configured Citrix workspaces on Chrome OS, so that, you know, if you need to field a whole bunch of workers, or you have a big need, say you're a relief company and you have a lot of disaster care workers, you can essentially just issue these devices very easily. They're ready to go with their employee experience and all the right things in place, so they can be effective with the least amount of effort. So I'd say it's a big step forward for a worker that is often neglected and underserved. >> Right. Definitely often neglected. And you brought up a good point there. One of the things that popped into my mind as you talked about, you know, the onboarding experience and the retention: these contact center folks are the front lines to the customer. So from a brand reputation perspective, that's on the line for companies in every industry where people with short fuses are dealing with contact center folks. So the ability to onboard them and give them a much more seamless experience is critical for brand reputation and customer retention in every industry, I would imagine. >> Absolutely. Especially when you're setting up a contact center, or you have a new product launching and you've got to onboard all these new workers, you can do it, and they're going to have the least challenges. They're going to be ready to go right out of the box, be able to receive their package with their device and their Citrix employee experience ready to go. You know, just turn the machine on and they're off to the races. And that's the vision, and that's the right one. So I was glad to see that as well. >> Yeah. Fantastic. One of the things also that Citrix did, the Citrix Workspace app builder, so that Citrix Workspace can now be a system of record for certain things like collaboration, surveys, maybe even COVID-19 information. Talk to me about why that's so critical for the distributed worker. >> So we've had this longstanding challenge in that we've had our systems of record, you know, these are CRM systems, ERP, things like that, which we use to run our business. And then we've had our collaboration tools, and they're separate, even though we're collaborating on sales deals and we're collaborating on our supply chain. And so the Teams announcement was in the same vein. We can say, let's close that gap between our systems of record and our collaboration tools. Well, this announcement says, all right, we still have these isolated systems of record. How can we streamline them and start connecting them together a little bit, so that we have processes that might cross all of those things, right? Let's say an order comes in from the CRM system, then you can complete it in the ERP system, you know, ordering that product for them so they actually get it. And that's probably overkill, that scenario, for this particular example. But, for example, collecting data from workers: let's build some forms and collect some data and then feed it to this process, or to this system of record.
You can do it much more easily than before. Before, you would have to hire a development team or a contractor to develop another system that would integrate, you know, CRM or ERP or whatever. Now you can do it very quickly inside that builder, for simple, basic applications, and get a lot of the low hanging fruit off your plate and more automated inside of your Citrix Workspace. >> And automation has been one of the keys that we've seen to streamlining worker productivity in the last 18 months. Another thing that I was looking at is, you know, the fact that we have so many different apps and we're constantly switching apps; context is constantly changing. Is this sort of system of record going to reduce the amount of context switching that employees have to do? >> Yep. Almost all of these announcements have some flavor of that, saying, can we start bringing more systems together in one place, so you're not switching between applications? You don't have different and disconnected sets of data, and if they are disconnected, you can connect them, right? That's what the app builder announcement again is about, saying, all right, if you're always using these three applications to do something, and you're switching between them, maybe you can just build something that connects them into one experience, and, you know, maybe a low-level IT person, or even a business user, can do that. That's the big trend right now. >> That's so important for that continued productivity, as things will continue to be a little bit unstable, I guess, for a while. One more thing that I saw that Citrix is announcing is integrations with Wrike. I've been a Wrike user myself. I like to have program and project management tools that I can utilize to keep track of projects, but they've done a number of integrations, one of them with Wrike Signature, which I thought was really cool. So, to get secure e-signatures within Wrike, based on a program or a project that you're working on. Talk to me about some of the boosts to Wrike that they've done and how you think that's going to be influential in the employee experience. >> Well, first let's just say that the Wrike acquisition was a really important one for Citrix to go above just the basic digital workplace and simple systems of record. This is really a mass collaboration tool for managing work itself. And so this is taking Citrix up the stack into more sophisticated work scenarios. And when we are in more sophisticated work scenarios, you want to be able to pull in different data sets. So, you know, they have the Citrix ShareFile support. You want to be able to bring in really important things like, you know, signing contracts or signing sales deals or mortgage applications, all sorts of exciting things that actually run in your business. And so Wrike Signature support is really important, so that when you have key processes that involve people putting signatures on documents, you can just build collaborative work management flows that take all that into account without having to leave the experience. Everything's in one place as much as possible. And this is the big push: we need to have all these different systems, but we don't have too many apps, what we have is too many touchpoints, so let's start combining some of these. And so the Wrike integrations really help you do that. >> Well, ultimately it seems like what Citrix is doing with the launchpad series for work,
all the announcements here, is really helping workers to work how and where they want to work, which is very similar to what we say when we're talking about the end user customer experience. When tech companies like Citrix say, we have to meet our customers where they are, it sounds like that's the same thing that's happening here. >> It is. And I would just add on top of that, and make it all safe. So you can bring all these systems together, work from anywhere, and you can feel confident that you're going to do so securely and safely. And it's that whole package, I think, that's really critical here. >> You're right, I'm glad you brought up that security. All right, Dion, take out your crystal ball for me. As we wrap things up, you're saying, you know, going into the future, we're going to be moving from this distributed workforce to hybrid. What are some of the things that you see as really critical happening in the next six to nine months? >> Well, there's a real push to say, we need to bring in all the workers that we've hired over the last year. Maybe not bringing them in in person, but can we use these collaborative tools and technologies to bring them in, hold them closer, so they get to know us? And so, you know, things like having Microsoft Teams integrated right into your Citrix Workspace makes it easier for you to collaborate with remote workers, inside any process, wherever you are. So whether you're in the office or not, it should bring workers closer, especially those remote ones that are at risk of being left out as they move to hybrid work. And that's really important. And so things like the app builder are also going to allow building those connections. And I think that workers and businesses are really going to try and build those bridges, because the number one thing I'm hearing from business leaders and IT leaders is, you know, we're worried about splitting into two different organizations, the ones that are remote and the ones that are in the office, and any way that we can bring all of them together in an easy way, in a natural way, and situate the digital employee experience so that we really get back to one company, one common culture, where everybody has equal access and equity in the employee experience, that's going to be really important. And I think that the Citrix launchpad announcements around work really are a step, a major step, in the right direction for that. There are still more things that have to be done, and all vendors are working on that. But it's nice to see. I really liked what Citrix is doing here to move the ball forward towards where we're all going. >> It is nice to see, and those connections are critically important. I happened to be at an in-person event last week, and several folks there had just been hired during the pandemic and were just getting to meet their teams. So in terms of getting that cultural alignment, once again, this is a great step towards that. Dion, thank you for joining me on the program, talking about the Citrix launchpad series for work and all the great new things they're announcing, and for sharing some of the things that you see coming down the pike. We appreciate your time. >> Thanks Lisa, for having me. >> For Dion Hinchcliffe, I'm Lisa Martin. You're watching this CUBE conversation. (upbeat music)

Published Date : Oct 12 2021


Shira Rubinoff | CUBE Conversation, October 2021


 

(upbeat music) >> Welcome to this CUBE conversation. I'm Dave Nicholson and we are recapping the Citrix launchpad series. This series presents announcements on LinkedIn Live on a variety of subjects, specifically cloud, security, and work, three topics that I think all of us are keenly aware of going through the last 18 months of the pandemic. Citrix has taken time to sort of regroup and look at ways that security can be improved so that it isn't a hindrance for members of staff, but instead offers a unified, integrated way of dealing with security across all of the variety of situations we find ourselves in today. Everything from a mobile device in a cafe, through actually working back in the office when we get the opportunity, to accessing information on a company issued laptop in a home office, secured networks, unsecured networks, secured browsers, unsecured browsers; the permutations are nearly endless. So Citrix has taken an interesting point of view, starting from the perspective of zero trust, meaning everything must be authenticated. They apply contextualism to their strategies, so the context and the posture of the access, the device, the location, all of those matter, so that security protocols are tailored to help enhance productivity and security instead of, again, being a hindrance. So I highly recommend you go to the Citrix launchpad site dedicated to security. Two senior Citrix execs, Tim and Joe, will go through the announcements in great detail, but let's recap a little bit from an overview perspective. The first is this idea of secure private access. You combine that with secure internet access, and now you have a package that allows this contextual security posture that can change and adapt based upon varying conditions. Additionally, they have announced a partnership with Google where all of these capabilities are built into Chrome OS. So now you have device-level native support for these protocols. They're also talking about bot management as something that is critical to security moving forward. Bots out phishing, you want to kill them. You don't want them getting into your system, but there are some bots that are okay to have poking around in your environment. So again, go into the details with Tim and Joe. Having said that, I am delighted to have a very special guest here. Friend of theCUBE, veteran of theCUBE, advisor, tech executive, and author of the book Cyber Minds, Shira Rubinoff, is going to join us in just a moment. (upbeat music) Hello, and welcome to this special CUBE conversation. I'm Dave Nicholson, and we are recapping the Citrix launchpad series with a focus on the topic of security. Now, whenever we're going to talk security on theCUBE, we have a CUBE veteran and the smartest person on cybersecurity that we know, Shira Rubinoff. She's a cybersecurity executive, author, and advisor, specifically the author of an excellent book on the subject, Cyber Minds. Shira, welcome back to theCUBE. >> Thank you. Pleasure to be here. >> How are you today? >> Doing great, always great to be on theCUBE and talk to you folks, and certainly to be part of something from Citrix. >> Well, that might be the last pleasant thing that we say, because we are surrounded by security threats. So are you ready to get serious? >> Oh, always with a smile, serious with a smile.
>> So, one kind of overriding question that a lot of people have now: if you're an IT executive, you've experienced a complete change in the world from so many different angles, but how has the pandemic changed the way you think of security? What are the dynamics at play, things that are different now that we couldn't have anticipated maybe two or three years ago? >> Interesting question. Certainly, if we look at the scope and the ecosystem of the way that organizations operated, it was pretty much in the high 90 percents of people being in the office, with just a few percent working from home. And that had to shift literally overnight to the flip side of it, having the bulk of the organization work from home, work remotely, and maybe the few people that had to be in the office were there. So all of a sudden organizations were left with this: how do we secure down our organization? How do we keep our employees safe? How do we keep our organization safe? How do we connect to the outside world? What do we do to maintain proper cyber, let's call it cyber hygiene, within an organization? And that's a topic that I talk about quite frequently. When you look at cybersecurity as a whole, we look at the cyber posture of an organization. We also have to break it down and say, what does an organization need to do to be fully cyber secure? So of course, the ongoing training, and that had to shift as well. We now have training for the organization and employees, but also think about the consumers and who else is interacting with organizations; we had to switch how that is done. And that has to be ongoing, with the global awareness where cybersecurity of course is top of mind. And then that would lead us to zero trust. Zero trust is a massive, massive piece of the cybersecurity need for organizations. We think about it as, the data is king. Whoever has the data, they rule the world. They own the organization, they do what they need to do. Zero trust: limited access, knowledge of who gets in, why they get in, the need to get in, and the need for that within the organization. So zero trust is a very key component, and Citrix is very focused on that as well. We talk about updated security and patching, and all of that has to happen remotely now. So not only are we thinking about all these topics, we have to think about them going at warp speed, with people that might be working remote who also have other things they have to take care of. Maybe they're taking care of elderly parents, maybe they're having to watch their kids on Zoom, making sure they're staying on Zoom, and all sorts of things with school, and maybe other roommates who are working for other organizations, not having important information in the background of their Zoom while they're having these important conversations with organizations. But also think about the multiple devices people are using. They may have an area that's set up properly in order to do their work, but then again, they have to be in another room at the same time; oh, let me just grab my device. So there's the whole area of the multiple devices, the warp speed of working and not, let's call it, pausing. And this is one of the key elements that I would tell all organizations: stop and pause, think about what you're doing before you do it. It saves headaches, but that wasn't in play at the height of the pandemic. At the height of the pandemic, we were worried about what's going on.
We needed knowledge and information, wherever we were getting this information, downloading it, clicking on links. Then we're working at the same time, taking care of people. So all these things are happening simultaneously, leaving these open vectors for the attack surface to be that much more heightened for the bad actors to get in. >> So, you advise some of the largest companies in the world on this subject, and obviously you're not going to reveal any names or specifics, but as a general overall view from your perspective, how are we doing right now? Is the average large organization now sort of back on cruise control, having figured everything out for this new reality? On a scale of 1 to 10, how well are we executing against all of these changes? >> That's a great question. Let me talk about the global whole. I think organizations are actually doing really well. I think there was a quick ramp up to figure out how to get it done, but also because of the shift toward sharing of information: some of these largest companies across the world came together to share information about bad actors, to share information about the attacks, to share information about what to do if something happens and who's out there, banding together almost like a whole. So it wasn't each finger on its own; it's a hand as a whole looking at it from a stronger perspective. So I think that shift, coupled with the knowledge and understanding of what companies needed to do in terms of locking down the organization, but also allowing and helping their employees, empowering them to get their work done, but get it done in a secure, safe fashion. And I believe now, as we all know, the ransomware attacks are prevalent, and they're becoming even more intense with the rise of 5G and the warp speed at which attacks can happen. We're now having to understand that being reactive is not enough; being proactive is something that it is wonderful to see organizations doing as well. It used to be, okay, let's be reactive. If something happens, what do we do? Let's have a plan in place. But that's not good enough, and we've seen that, because these attacks are coming at warp speed. So the proactive stance these organizations have taken is commendable in general. I can't speak for all the companies, but with the ones that I've been consulting to and have interactions with, I'm pleasantly surprised, and not surprised as well, at the way that they've taken their cyber posture so seriously, and where they focus in, not only on the organization as a whole, but on their employees as individuals, what their needs are, and being able to give them what they need to do their jobs well. >> Yeah, that makes sense. You can almost think of it like cybersecurity is a team sport, and all of that proactive work that an organization does can be absolutely undermined if we don't do our parts as endpoints, as endpoint people. And when someone reads Cyber Minds, I think there's an undercurrent that I definitely sensed. And then when I looked more closely into your background, I realized that, yes, in fact, you do have a background in psychology. I want to shift to a question along that line, if you don't mind. Thinking about the psychology of people who have lived through the pandemic, this concept of our personal hygiene and our personal security has been at the forefront of our minds. As you leave the house, there's hand sanitizer and masks and maybe gloves; we're very, very aware of this.
How has that affected us from a cybersecurity team sport perspective? Has that made us better players on the field? What are your thoughts in that regard? >> I actually love that question. As we saw the pandemic heighten, everyone became hyper-aware of their own personal cleanliness, let's call it, in terms of where they are, what they're doing, if they're masking, if they're putting on gloves; the sanitizers are everywhere, six feet apart. Everybody's thinking about that. It's at the forefront. It became a way of life. And if you then shift that and say, okay, let's look at the technology or the cybersecurity part of it, your own personal safety, your own personal cybersecurity, I think we failed a lot in that area. I think because, if you think about the human psychology and the fact that people needed information, everybody was hungry for the latest and greatest information. What's going on? What are the stats? How many people? Just a terrible, terrible pandemic, with so many people getting sick, so many people dying, and wanting to know, what is going on? What are the latest rule sets? What can I do? What else can I do to protect myself? What is my business doing? So we also had bad actors sending out phishing attacks, which heightened tremendously. There was information being sent out: click here for the latest, this is Dr. Fauci's latest report. Everything going out there was not necessarily to help us, but to hurt us. And because of people's human psychology of thinking, I need to protect myself, so I need the information, the stop and pause is: is this the right information? Is this a safe place to go? But then there's also the other flip side of, if I'm not interacting, I'm not there. Think about the different generations we have going on: Gen Z, millennials, all sorts. Everybody's all over social media, and everybody needs to and wants to have a presence there, certainly in this world. So putting out lots of information and being present was very critical, 'cause people weren't in person anymore. So people were interacting online, whether it be on social, whether it be telling people where they're going, what they're doing, what they're eating, what their favorite animal is, all sorts of things that they were doing. But they were giving over personal information that may have been utilized as passwords, or ways to get to know somebody, to either do a spear phishing attack or any type of attack to gather information to hurt, not just personally, to steal money or to steal someone's identity, but to come in and hurt the company. Information was everywhere. So we were taking care of our personal cleanliness, but our cyber hygiene, the psychological aspect of cybersecurity itself, I think took a big dive. And I think that people started becoming aware as these attack surfaces grew. There were also different types of attacks that were happening, where phone calls were coming in saying, somebody is breaking into your bank account, just verify yourself, give me the last four digits, I need to know who you are. So playing on the human psyche of fear, somebody is trying to get you nervous. So what are you going to do? You're going to act quickly without thinking. Or, all sorts of things; I think we were talking earlier about extended warranties for different things. That also grew extensively, but how did they do that? They were gathering information, personal information, to give you something you want.
So if you're playing again on human psychology, when people get what they want, they're more likely to give over something they may not give over otherwise. And one of my biggest examples, or a strong example, is back in the day with Candy Crush. If you think about that game, before you sign up for it, you literally have to give over your kidney. You're giving over access to your camera, to your contacts. If you look back at the permissions you were giving, it's really unbelievable that everybody was clicking yes, because they wanted to play a game. So take that example and transfer that into real life. We were doing the same thing. So the importance of brushing up on that personal cyber hygiene, and really understanding what people needed to do to heighten their own security themselves: less sharing on social, not giving over information that they shouldn't, not letting in a so-called trusted source who isn't really a trusted source. Having strong zero trust, not just for organizations but for yourself, was very important. >> Yeah, now, did we, Chuck? Chuck's my producer. Did we get Shira's social security number and her date of birth? Shira, can you give us that? >> Sure, it's 555-55-5555. >> Excellent. Aha, phishing attack. >> There you go, go for it. (laughs) >> So you think there could be a little bit of security fatigue that might come into play when we're thinking of living up to our responsibilities as those endpoints? >> I think there was just fatigue in general, and people were tired of being locked in the house. People were tired of having everybody under the same roof all the time, 24/7, trying to get work done, trying to get school done, taking care of people, what they needed to do, having groceries delivered, going into grocery stores, all the things that they had to do, a way of life that we all took for granted before the pandemic. It was just a whole shift. People were just antsy, jumpy. We needed to connect, and we needed to connect in any way we could. So all these open vectors became a problem that ended up hurting us rather than helping us. So this has been something that was a big mind shift as the pandemic continued. People started realizing what was going on, and organizations took a good stand on educating the population and telling them, look, these are the things that are happening, this is what we need to do. Certainly a lot of the companies I'm working with did such a great job with that, giving their employees the wherewithal to connect, but to do it in a secure manner, giving them the tools they needed not just for their work lives but also in their personal lives. So that was helpful too. And as we're coming out of it, hopefully continuing to come completely out of it, we'll see the shift back into, let's take that stop and pause, let's think about what we're doing. >> Yeah, well, we are all looking forward to whatever semblance of normal we can get back to. Shira, I could spend hours picking your brain on a variety of subjects. Unfortunately, we are coming to the end of our time together. Do you promise to come back? >> Certainly, a big fan of theCUBE. >> Well, fantastic. Shira Rubinoff, thank you so much for your time. This is Dave Nicholson with a very special CUBE conversation, signing out. Thanks for watching. >> Shira: Thank you too. (gentle music)

Published Date : Oct 8 2021


Amanda Silver, Microsoft | DockerCon 2021


 

>> Welcome back to theCUBE's coverage of DockerCon 2021. I'm John Furrier, your host of theCUBE. We're here with Amanda Silver, corporate vice president of product, developer division at Microsoft. Amanda, great to see you. You were on last year at DockerCon. Great to see you again a full year later; we're remote. Thanks for coming on. I know you're super busy with Build happening this week as well. Thanks for making the time to come on theCUBE for DockerCon. >> Thank you so much for having me. Yeah, I'm joining you, like many developers around the globe, from my personal home office. >> Developers really didn't skip a beat during the pandemic, and again, it was not a good situation, but developers, as you talked about last year, were on the front lines, first responders to creating value, quite frankly. Looking back, you were pretty accurate in your prediction: developers did have an impact this year. They did create the kind of change that really changed the game for people's lives, whether it was developing solutions from a medical standpoint or even keeping systems running, from call centers to making sure people got their goods or services and checks, and kept sanity together. So. >> Yeah, absolutely. I mean, I think developers, you know, get the MVP award for this year, because, you know, at the end of the day they are the digital first responders to the first responders, in the pivot that we've had to make over the past year in terms of supporting remote telehealth, supporting, you know, online retail, curbside pickup. All of these things, remote learning too, were done through developers being the ones pushing the way forward. You know, my kids are learning at home right behind me right now, so you might hear them during the interview; that's happening because developers made that happen. >> I can hear it now: mom, please stop hogging the bandwidth, they've got a gigabit. Stop it, don't be streaming. My kids are all gamers anyway. Hey, great to have you on, and you gave a great keynote. Exciting to see you guys continue the collaboration with Docker, with GitHub and Microsoft. A great combination, it's a one-two-three power punch of value. You guys are really kind of killing it. We heard from Scott, and Dan has been on theCUBE. What are your thoughts on the partnership between the developer division team at Microsoft and Docker? What's it all about this year? What's the next level?
And one of the things that we do is we think about pain points and various workflows. We want to make sure that we're shaving off the edges of all of the user experience is the developers have to go through to piece all of these applications together. So one of the big pain points that we have heard from developers is that signing into the Azure cloud and especially our sovereign clouds was challenging. So we contributed back to uh back to doctor to actually make it easier to sign into these clouds. And so dr developers can now use dr desktop and the Doctor Cli to actually change the doctor context so that its Azure. So that makes it a lot easier to connect the other. Oh, sorry, go ahead. No, I was just >>going to say, I love the reference of the police song. Every breath you take, every >>mouth moving. Great, >>great line there. Uh, but I want to ask you while you're on this modern cloud um, discussion, what is I mean we have a lot of developers here at dr khan. As you know, you guys know developers in your ecosystem in core competency. From Microsoft, Kublai khan is a very operator like focus developed. This is a developer conference. You guys have build, what is the state of the art for a modern cloud developer? Could you just share your thoughts because this comes up a lot. You know, what's through the art? What's next jan new guard guard? It's his legacy. What is the state of the art for a modern cloud developer? >>Fantastic question. And extraordinarily relevant to this particular conference. You know what I think about often times it's really what is the inner loop and the outer loop look like in terms of cycle times? Because at the end of the day, what matters is the time that it takes for you to make that code change, to be able to see it in your test environment and to be able to deploy it to production and have the confidence that it's delivering the feature set that you need it to. And it's, you know, it's secure, it's reliable, it's performance, that's what a developer cares about at the end of the day. Um, at the same time, we also need to make sure that we're growing our team to meet our demand, which means we're constantly on boarding new developers. And so what I take inspiration from our, some of the tech elite who have been able to invest significant amounts in, in tuning their engineering systems, they've been able to make it so that a new developer can join a team in just a couple of minutes or less that they can actually make a code change, see that be reflected in their application in just a few seconds and deploy with confidence within hours. And so our goal is to actually be able to take that state of the art metric and democratize that actually bring it to as many of our customers as we possibly can. >>You mentioned supply chain earlier in securing that. What are you guys doing with Docker and how to make that partnership better with registries? Is there any update there in terms of the container registry on Azure? >>Yeah, I mean, you know, we, we we have definitely seen recent events and and it almost seems like a never ending attacks that that you know, increasingly are getting more and more focused on developer watering holes is how we think about it. Kind of developers being a primary target um for these malicious hackers. And so what it's more important than ever that every developer um and Microsoft especially uh really take security extraordinarily seriously. 
Our engineers are working around the clock to make sure that we are responding to every security incident that we hear about and partnering with our customers to make sure that we're supporting them as well. One of the things that we announced earlier this week at Microsoft build is that we've actually taken, get have actions and we've now integrated that into the Azure Security Center. And so what this means is that, you know, we can now do things like scan for vulnerabilities. Um look at things like who is logging in, where things like that and actually have that be tracked in the Azure security center so that not just your developers get that notification but also your I. T. Operations. Um In terms of the partnership with dR you know, this is actually an ongoing partnership to make sure that we can provide more guidance to developers to make sure that they are following best practices like pulling from a private registry like Docker hub or at your container registry. So I expect that as time goes on will continue to more in partnership in this space >>and that's going to give a lot of confidence. Actually, productivity wise is going to be a big help for developers. Great stuff is always good, good progress. They're moving the needle. >>Last time we >>spoke we talked about tools and setting Azure as the doctor context duty tooling updates here at dot com this year. That's notable. >>Yeah, I mean, I think, you know, there's one major thing that we've been working on which has a big dependency on docker is get help. Code space is now one of the biggest pain points that developers have is setting up a new DEV box, which they often have to do when they are on boarding a new employee or when they're starting a new project or even if they're just kicking the tires on a new technology that they want to be able to evaluate and sometimes creating a developer environment can actually take hours um and especially when you're trying to create a developer environment that matches somebody else's developer environment that can take like a half a day and you can spend all of your time just debugging the differences in environment variables, for example, um, containers actually makes that much easier. So what you can do with this, this services, you can actually create death environment spun up in the cloud and you can access it in seconds and you get from there are working coding environment and a runtime environment and this is repeatable via containers. So it means that there's no inadvertent differences introduced by each DEV. And you might be interested to know that underneath this is actually using Docker files and dr composed to orchestrate the debits and the runtime bits for a whole bunch of different stacks. And so this is something that we're actually working on in collaboration with the with the doctor team to have a common the animal format. And in fact this week we actually introduced a couple of app templates so that everybody can see this all in action. So if you check out a ca dot m s forward slash app template, you can see this in action yourself. >>You guys have always had such a strong developer community and one thing I love about cloud as it brings more agility, as we always talk about. 
But when you start to see the enterprise grow into, the direction is going now, it's almost like the developer communities are emerging, it's no longer about all the Lennox folks here and the dot net folks there, you've got windows, you've got cloud, >>it's almost >>the the the solidification of everyone kind of coming together. Um and visual studio, for instance, last year, I think you were talking about that to having to be interrogated dr composed, et cetera. >>How do you see >>this melting pot emerging? Because at the end of the day, you pick the language you love and you got devops, which is infrastructure as code doesn't matter. So give us your take on where we are with that whole progress of of making that happen. >>Well, I mean I definitely think that, you know, developer environments and and kind of, you know, our approach to them don't need to be as dogmatic as they've been in the past. I really think that, you know, you can pick the right tool and language and stand developer stack for your team, for your experience and you can be productive and that's really our goal. And Microsoft is to make sure that we have tools for every developer and every team so that they can build any app that they want to want to create. Even if that means that they're actually going to end up ultimately deploying that not to our cloud, they're going to end up deploying it to AWS or another another competitive cloud. And so, you know, there's a lot of things that we've been doing to make that really much easier. We have integrated container tools in visual studio and visual studio code and better cli integrations like with the doctor context that we had talked about a little bit earlier. We continue to try to make it easier to build applications that are targeting containers and then once you create those containers it's much easier to take it to another environment. One of the examples of this kind of work is now that we have WsL and the Windows subsystem for Lennox. This makes it a lot easier for developers who prefer a Windows operating system as their environment and maybe some tools like Visual Studio that run on Windows, but they can still target Lennox with as their production environment without any impedance mismatch. They can actually be as productive as they would be if they had a Linux box as their Os >>I noticed on this session, I got to call this out. I want to get your reaction to it interesting. Selection of Microsoft talks, the container based development. Visual studio code is one that's where you're going to show some some some container action going on with note and Visual Studio code. And then you get the machine learning with Azure uh containers in the V. S. Code. Interesting how you got, you know, containers with V. S. And now you've got machine learning. What does that tell the world about where Microsoft's at? Because in a way you got the cutting edge container management on one side with the doctor integration. Now you get the machine learning which everyone's talking about shifting, left more automation. Why are these sessions so important? Why should people attend? And what's the what's the bottom line? >>Well, like I said, like containers basically empower developer productivity. Um that's what creates the reputable environments, that's what allows us to make sure that, you know, we're productive as soon as we possibly can be with any text act that we want to be able to target. Um and so that's kind of almost the ecosystem play. 
Um it's how every developer can contribute to the success of others and we can amor ties the kinds of work that we do to set up an environment. So that's what I would say about the container based development that we're doing with both visual studio and visual studio code. Um in terms of the machine learning development, uh you know, the number of machine learning developers in the world is relatively small, but it's growing and it's obviously a very important set of developers because to train a machine learning uh to train an ml model, it actually requires a significant amount of compute resources, and so that's a perfect opportunity to bring in the research that are in a public cloud. Um What's actually really interesting about that particular develop developer stack is that it commonly runs on things like python. And for those of you who have developed in python, you know, just how difficult it is to actually set up a python environment with the right interpreter, with the right run time, with the right libraries that can actually get going super quickly, um and you can be productive as a developer. And so it's actually one of the hardest, most challenging developer stacks to actually set up. And so this allows you to become a machine learning developer without having to spend all of your time just setting up the python runtime environment. >>Yeah, it's a nice, nice little call out on python, it's a double edged sword. It's easier to sling code around on one hand, when you start getting working then you gotta it gets complicated can get well. Um Well the great, great call out there on the island, but good, good, good project. Let me get your thoughts on this other tool that you guys are talking about project tie. Uh This is interesting because this is a trend that we're seeing a lot of conversations here on the cube about around more too many control planes. Too many services. You know, I no longer have that monolithic application. I got micro micro applications with microservices. What the hell is going on with my services? >>Yeah, I mean, I think, you know, containers brought an incredible amount of productivity in terms of having repeatable environments, both for dev environments, which we talked about a lot on this interview already, but also obviously in production and test environments. Super important. Um and with that a lot of times comes the microservices architecture that we're also moving to and the way that I view it is the microservices architecture is actually accompanied by businesses being more focused on the value that they can actually deliver to customers. And so they're trying to kind of create separations of concerns in terms of the different services that they're offering, so they can actually version and and kind of, you know, actually improve each of these services independently. But what happens when you start to have many microservices working together in a SAS or in some kind of aggregate um service environment or kind of application environment is it starts to get unwieldy, it's really hard to make it so that one micro service can actually address another micro service. They can pass information back and forth. And you know what used to be maybe easy if you were just building a client server application because, you know, within the server tear all of your code was basically contained in the same runtime environment. That's no longer the case when every microservices actually running inside of its own container. 
So the question is, how can we improve program ability by making it easier for one micro service that's being used in an application environment, be to be able to access another another service and kind of all of that context. Um and so, you know, you want to be able to access the service is the the api endpoint, the containers, the ingress is everything, make everything work together as though it felt just as easy as as um you know, server application development. Um And so what this means as well is that you also oftentimes need to get all of these different containers running at the same time and that can actually be a challenge in the developer and test loop as well. So what project tie does is it improves the program ability and it actually allows you to just write a command like thai run so that you can actually in stan she ate all of these containers and get them up and running and basically deploy and run your application in that environment and ultimately make the dev testing or loop much faster >>than productivity gain. Right. They're making it simple to stand up. Great, great stuff. Let me ask you a question as we kind of wrap down here for the folks here at Dakar Con, are >>there any >>special things you'd like to talk about the development you think are important for the developers here within this space? It's very dynamic. A lot of change happening in a good way. Um, but >>sometimes it's hard to keep >>track of all the cool stuff happening. Could you take a minute to, to share your thoughts on what you think are the most important develops developments in this space? That that might be interesting to ducker con attendees. >>I think the most important things are to recognize that developer environments are moving to containerized uh, environments themselves so that they can be repeated, they can be shared, the work, configuring them can be amortized across many developers. That's important thing. Number one important thing. Number two is it doesn't matter as much what operating system you're running as your chrome, you know, desktop. What matters is ultimately the production environment that you're targeting. And so I think now we're in a world where all of those things can be mixed and matched together. Um and then I think the next thing is how can we actually improve microservices, uh programming development together um so that it's easier to be able to target multiple micro services that are working in aggregate uh to create a single service experience or a single application. And how do we improve the program ability for that? >>You know, you guys have been great supporters of DACA and the community and open source and software developers as they transform and become quite frankly the superheroes for the transformation, which is re factoring businesses. So this has been a big thing. I'd love to get your thoughts on how this is all coming together inside Microsoft, you've got your division, you get the developer division, you got GIT hub, got Azure. Um, and then just historically, and he put this up last year army of an ecosystem. People who have been contributing encoding with Microsoft and the partners for many, many decades. >>Yes. The >>heart Microsoft now, how's it all working? What's the news? I get Lincoln, Lincoln, but there's no yet developer model there yet, but probably is soon. >>Um Yeah, I mean, I think that's a pretty broad question, but in some ways I think it's interesting to put it in the context of Microsoft's history. 
You know, I think when I think back to the beginning of my career, it was kind of a one stack shop, you know, we was all about dot net and you know, of course we want to dot net to be the best developer environment that it can possibly be. We still actually want that. We still want that need to be the most productive developer environment. It could we could possibly build. Um but at the same time, I think we have to recognize that not all developers or dot net developers and we want to make sure that Azure is the most productive cloud for developers and so to do that, we have to make sure that we're building fantastic tools and platforms to host java applications, javascript applications, no Js applications, python applications, all of those things, you know, all of these developers in the world, we want to make sure it can be productive on our tools and our platforms and so, you know, I think that's really kind of the key of you know what you're speaking of because you know, when I think about the partnership that I have with the GIT hub team or with the Azure team or with the Azure Machine learning team or the Lincoln team, um A lot of it actually comes down to helping empower developers, improving their productivity, helping them find new developers to collaborate with, um making sure that they can do that securely and confidently and they can basically respond to their customers as quickly as they possibly can. Um and when, when we think about partnering inside of Microsoft with folks like linkedin or office as an example, a lot of our partnership with them actually comes down to improving their colleagues efficiency. We build the developer tools that office and lengthen are built on top of and so every once in a while we will make an improvement that has, you know, 5% here, 3% there and it turns into an incredible amount of impact in terms of operations, costs for running these services. >>It's interesting. You mentioned earlier, I think there's a time now we're living in a time where you don't have to be dogmatic anymore, you can pick what you like and go with it. Also that you also mentioned just now this idea of distributed applications, distributed computing. You know, distributed applications and microservices go really well together. Especially with doctor. >>Can you share >>your thoughts on the framework that you guys released called Dapper? >>Yeah, yeah. We recently released Dapper. It's called D A P R. You can look it up on GIT hub and it's a programming model for common microservices pattern, two common microservices patterns that make it really easy and automatic to create those kinds of microservices. So you can choose to work with your favorite state stores or databases or pub sub components and get things like cloud events for free. You can choose either http or g R B C so that you can get mesh capabilities like service discovery and re tries and you can bring your own secret store and easily be able to call it from any environment variable. It's also like I was talking about earlier, multi lingual. Um so you don't need to embrace dot net, for example, as you're programming language to be able to benefit from Dapper, it actually supports many programming languages and Dapper itself is actually written and go. Um and so, you know, all developers can benefit from something like Dapper to make it easier to create microservices applications. >>I mean, always great to have you on great update. Take a minute to give an update on what's going on with your division. 
I know you had to build conference this week. V. S has got the new preview title. We just talked about what are the things you want to get to plug in for? Take a minute to get to plug in for what you're working on, your goals, your objectives hiring, give us the update. >>Yeah, sure. I mean, you know, we we built integrated container tools in visual studio uh and the Doctor extension and Visual Studio code and cli extensions. Uh and you know, even in this most recent release of our Visual Studio product, Visual Studio 16 10, we added some features to make it easier to use DR composed better. So one of the examples of this is that you can actually have uh Oftentimes you need to be able to use multiple doctor composed files together so that you can actually configure various different container environments for a single single application. But it's hard sometimes to create the right Yeah. My file so that you can actually invoke it and invoke the the container and the micro services that you need. And so what this allows you to do is to actually have just a menu of the different doctor composed files so that you can select the runtime and test environment that you need for the subset of the portion of the application that you're working on at the end of the day. This is always about developer productivity. You know, like I said, every keystroke matters. Um and we want to make sure that you as a developer can focus on the code that only you can Right. >>Amanda Silver, corporate vice president product development division of Microsoft. Always great to see you and chat with you remotely soon. We'll be back in in real life with real events soon as we come out of the pandemic and thanks for sharing your insight and congratulations on your success this year and and congratulations on your announcement here at Dakar Gone. >>Thank you so much for having me. >>Okay Cube coverage for Dunkirk on 2021. I'm John for your host of the Cube. Thanks for watching. Mhm

Published Date : May 28 2021


Jamie Thomas, IBM | IBM Think 2021


 

>> Narrator: From around the globe, it's the CUBE with digital coverage of IBM Think 2021, brought to you by IBM. >> Welcome back to IBM Think 2021, the virtual edition. This is theCUBE's continuous, deep-dive coverage of the people, processes and technologies that are really changing our world. Right now, we're going to talk about modernization and what's beyond with Jamie Thomas, general manager, strategy and development, IBM Enterprise Security. Jamie, always a pleasure. Great to see you again. Thanks for coming on. >> It's great to see you, Dave. And thanks for having me on, the CUBE is always a pleasure. >> Yeah, it is our pleasure. And listen, we've been hearing a lot about IBM's focus on hybrid cloud, Arvind Krishna says we must win the architectural battle for hybrid cloud. I love that. We've been hearing a lot about AI. And I wonder if you could talk about IBM Systems and how it plays into that strategy? >> Sure, well, it's a great time to have this discussion Dave. As you all know, IBM Systems Technology is used widely around the world, by many, many thousands of clients in the context of our IBM System Z, our power systems and storage. And what we have seen is really an uptake of modernization around those workloads, if you will, driven by hybrid cloud, the hybrid cloud agenda, as well as an uptake of Red Hat OpenShift, as a vehicle for this modernization. So it's pretty exciting stuff, what we see as many clients taking advantage of OpenShift on Linux, to really modernize these environments, and then stay close, if you will, to that systems of record database and the transactions associated with it. So they're seeing a definite performance advantage to taking advantage of OpenShift. And it's really fascinating to see the things that they're doing. So if you look at financial services, for instance, there's a lot of focus on risk analytics. So things like fraud, anti money laundering, mortgage risk, types of applications being done in this context, when you look at our retail industry clients, you see also a lot of customer centricity solutions, if you will, being deployed on OpenShift. And once again, having Linux close to those traditional LPARs of AIX, I-Series, or in the context of z/OS. So those are some of the things we see happening. And it's quite real. >> Now, you didn't mention power, but I want to come back and ask you about power. Because a few weeks ago, we were prompted to dig in a little bit when Arvind was on with Pat Gelsinger at Intel and talking about the relationship you guys have. And so we dug in a little bit, we thought originally, we said, oh, it's about quantum. But we dug in. And we realized that the POWER10 is actually the best out there and the highest performance in terms of disaggregating memory. And we see that as a future architecture for systems, and we're actually really quite excited about the potential that brings, not only to build beyond system on a chip and system on a package, but to start doing interesting things at the Edge. You know, what's going on with power? >> Well, of course, when I talked about OpenShift, we're doing OpenShift on Power Linux, as well as Z Linux, but you're exactly right in the context of the POWER10 processor. We couldn't be more excited about this processor. First of all, it's our first delivery with our partner Samsung with a seven nanometer form factor. The processor itself has only 18 billion transistors. So it's got a few transistors there.
But one of the cool inventions, if you will, that we have created is this expansive memory region as part of this design point, which we call memory inception, it gives us the ability to reach memory across servers, up to two petabytes of memory. Aside from that, this processor has generational improvements and core and thread performance, improved energy efficiency. And all of this, Dave is going to give us a lot of opportunity with new workloads, particularly around artificial intelligence and inferencing around artificial intelligence. I mean, that's going to be that's another critical innovation that we see here in this POWER10 processor. >> Yeah, processor performance is just exploding. We're blowing away the historical norms. I think many people don't realize that. Let's talk about some of the key announcements that you've made in quantum last time we spoke on the qubit for last year, I think we did a deeper dive on quantum. You've made some announcements around hardware and software roadmaps. Give us the update on quantum please. >> Well, there is so much that has happened since we last spoke on the quantum landscape. And the key thing that we focused on in the last six months is really an articulation of our roadmaps, so the roadmap around hardware, the roadmap around software, and we've also done quite a bit of ecosystem development. So in terms of the roadmap around hardware, we put ourselves out there we've said we were going to get to over 1000 qubit machine and in 2023, so that's our milestone. And we've got a number of steps we've outlined along that way, of course, we have to make progress, frankly, every six months in terms of innovating around the processor, the electronics and the fridge associated with these machines. So lots of exciting innovation across the board. We've also published a software roadmap, where we're articulating how we improve a circuit execution speeds. So we hope, our plan to show shortly a 100 times improvement in circuit execution speeds. And as we go forward in the future, we're modifying our Qiskit programming model to not only allow a easily easy use by all types of developers, but to improve the fidelity of the entire machine, if you will. So all of our innovations go hand in hand, our hardware roadmap, our software roadmap, are all very critical in driving the technical outcomes that we think are so important for quantum to become a reality. We've deployed, I would say, in our quantum cloud over, you know, over 20 machines over time, we never quite identify the precise number because frankly, as we put up a new generation machine, we often retire when it's older. So we're constantly updating them out there, and every machine that comes on online, and that cloud, in fact, represents a sea change and hardware and a sea change in software. So they're all the latest and greatest that our clients can have access to. >> That's key, the developer angle you got redshift running on quantum yet? >> Okay, I mean, that's a really good question, you know, as part of that software roadmap in terms of the evolution and the speed of that circuit execution is really this interesting marriage between classical processing and quantum processing and bring those closer together. And in the context of our classical operations that are interfacing with that quantum processor, we're taking advantage of OpenShift, running on that classical machine to achieve that. 
And once again, if, as you can imagine, that'll give us a lot of flexibility in terms of where that classical machine resides and how we continue the evolution the great marriage, I think that's going to that will exist that does exist and will exist between classical computing and quantum computing. >> I'm glad I asked it was kind of tongue in cheek. But that's a key thread to the ecosystem, which is critical to obviously, you know, such a new technology. How are you thinking about the ecosystem evolution? >> Well, the ecosystem here for quantum is infinitely important. We started day one, on this journey with free access to our systems for that reason, because we wanted to create easy entry for anyone that really wanted to participate in this quantum journey. And I can tell you, it really fascinates everyone, from high school students, to college students, to those that are PhDs. But during this journey, we have reached over 300,000 unique users, we have now over 500,000 unique downloads of our Qiskit programming model. But to really achieve that is his back plane by this ongoing educational thrust that we have. So we've created an open source textbook, around Qiskit that allows organizations around the world to take advantage of it from a curriculum perspective. We have over 200 organizations that are using our open source textbook. Last year, when we realized we couldn't do our in person programming camps, which were so exciting around the world, you can imagine doing an in person programming camp and South Africa and Asia and all those things we did in 2019. Well, we had just like you all, we had to go completely virtual, right. And we thought that we would have a few 100 people sign up for our summer school, we had over 4000 people sign up for our summer school. And so one of the things we had to do is really pedal fast to be able to support that many students in this summer school that kind of grew out of our proportions. The neat thing was once again, seeing all the kids and students around the world taking advantage of this and learning about quantum computing. And then I guess that the end of last year, Dave, to really top this off, we did something really fundamentally important. And we set up a quantum center for historically black colleges and universities, with Howard University being the anchor of this quantum center. And we're serving 23 HBCUs now, to be able to reach a new set of students, if you will, with STEM technologies, and most importantly, with quantum. And I find, you know, the neat thing about quantum is is very interdisciplinary. So we have quantum physicist, we have electrical engineers, we have engineers on the team, we have computer scientists, we have people with biology and chemistry and financial services backgrounds. So I'm pretty excited about the reach that we have with quantum into HBCUs and even beyond right I think we can do some we can have some phenomenal results and help a lot of people on this journey to quantum and you know, obviously help ourselves but help these students as well. >> What do you see in people do with quantum and maybe some of the use cases. I mean you mentioned there's sort of a connection to traditional workloads, but obviously some new territory what's exciting out there? >> Well, there's been a really a number of use cases that I think are top of mind right now. 
So one of the most interesting to me has been one that showed us a few months ago that we talked about in the press actually a few months ago, which is with Exxon Mobil. And they really started looking at logistics in the context of Maritime shipping, using quantum. And if you think of logistics, logistics are really, really complicated. Logistics in the face of a pandemic are even more complicated and logistics when things like the Suez Canal shuts down, are even more complicated. So think about, you know, when the Suez Canal shut down, it's kind of like the equivalent of several major airports around the world shutting down and then you have to reroute all the traffic, and that traffic and maritime shipping is has to be very precise, has to be planned the stops are plan, the routes are plan. And the interest that ExxonMobil has had in this journey is not just more effective logistics, but how do they get natural gas shipped around the world more effectively, because their goal is to bring energy to organizations into countries while reducing CO2 emissions. So they have a very grand vision that they're trying to accomplish. And this logistics operation is just one of many, then we can think of logistics, though being a being applicable to anyone that has a supply chain. So to other shipping organizations, not just Maritime shipping. And a lot of the optimization logic that we're learning from that set of work also applies to financial services. So if we look at optimization, around portfolio pricing, and everything, a lot of the similar characteristics will also go be applicable to the financial services industry. So that's one big example. And I guess our latest partnership that we announced with some fanfare, about two weeks ago, was with the Cleveland Clinic, and we're doing a special discovery acceleration activity with the Cleveland Clinic, which starts prominently with artificial intelligence, looking at chemistry and genomics, and improve speed around machine learning for all of the the critical healthcare operations that the Cleveland Clinic has embarked on but as part of that journey, they like many clients are evolving from artificial intelligence, and then learning how they can apply quantum as an accelerator in the future. And so they also indicated that they will buy the first commercial on premise quantum computer for their operations and place that in Ohio, in the the the years to come. So it's a pretty exciting relationship. These relationships show the power of the combination, once again, of classical computing, using that intelligently to solve very difficult problems. And then taking advantage of quantum for what it can uniquely do in a lot of these use cases. >> That's great description, because it is a strong connection to things that we do today. It's just going to do them better, but then it's going to open up a whole new set of opportunities. Everybody wants to know, when, you know, it's all over the place. Because some people say, oh, not for decades, other people say I think it's going to be sooner than you think. What are you guys saying about timeframe? >> We're certainly determined to make it sooner than later. Our roadmaps if you note go through 2023. And we think the 2023 is going to will be a pivotal year for us in terms of delivery around those roadmaps. 
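For readers unfamiliar with the Qiskit programming model mentioned earlier in this conversation, a minimal example is sketched below. It assumes the open-source qiskit package of that era with its bundled Aer simulator; the backend name and shot count are illustrative choices, not anything prescribed in the interview.

```python
from qiskit import QuantumCircuit, Aer, execute

# Two-qubit Bell-state circuit: a Hadamard, then a CNOT, then measurement.
circuit = QuantumCircuit(2, 2)
circuit.h(0)
circuit.cx(0, 1)
circuit.measure([0, 1], [0, 1])

# Run locally on the simulator; against the IBM Quantum cloud the backend
# would instead come from an account provider rather than Aer.
backend = Aer.get_backend("qasm_simulator")
counts = execute(circuit, backend, shots=1024).result().get_counts()
print(counts)  # roughly half '00' and half '11'
```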
But it's these kind of use cases and this intense working with these clients, 'cause when they work with us, they're giving us feedback on everything that we've done, how does this programming model really help me solve these problems? What do we need to do differently? In the case of Exxon Mobil, they've given us a lot of really great feedback on how we can better fine tune all elements of the system to improve that system. It's really allowed us to chart a course for how we think about the programming model in particular in the context of users. Just last week, in fact, we announced some new machine learning applications, which these applications are really to allow artificial intelligence users and programmers to get take advantage of quantum without being a quantum physicist or expert, right. So it's really an encapsulation of a composable elements so that they can start to use, using an interface allows them to access through PyTorch into the quantum computer, take advantage of some of the things we're doing around neural networks and things like that, once again, without having to be experts in quantum. So I think those are the kind of things we're learning how to do better, fundamentally through this co-creation and development with our quantum network. And our quantum network now is over 140 unique organizations and those are commercial, academic, national laboratories and startups that we're working with. >> The picture started become more clear, we're seeing emerging AI applications, a lot of work today in AI is in modeling. Over time, it's going to shift toward inference and real time and practical applications. Everybody talks about Moore's law being dead. Well, in fact, the yes, I guess, technically speaking, but the premise or the outcome of Moore's law is actually accelerating, we're seeing processor performance, quadrupling every two years now, when you include the GPU along with the CPU, the DSPs, the accelerators. And so that's going to take us through this decade, and then then quantum is going to power us, you know, well beyond who can even predict that. It's a very, very exciting time. Jamie, I always love talking to you. Thank you so much for coming back on the CUBE. >> Well, I appreciate the time. And I think you're exactly right, Dave, you know, we talked about POWER10, just for a few minutes there. But one of the things we've done in POWER10, as well as we've embedded AI into every core that processor, so you reduce that latency, we've got a 10 to 20 times improvement over the last generation in terms of artificial intelligence, you think about the evolution of a classical machine like that state of the art, and then combine that with quantum and what we can do in the future, I think is a really exciting time to be in computing. And I really appreciate your time today to have this dialogue with you. >> Yeah, it's always fun and it's of national importance as well. Jamie Thomas, thanks so much. This is Dave Vellante with the CUBE keep it right there our continuous coverage of IBM Think 2021 will be right back. (gentle music) (bright music)
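The PyTorch interface Jamie Thomas describes appears to correspond to the connector layer in the open-source qiskit-machine-learning package; the sketch below is an assumption about how such a layer is typically wired up, using class names (TwoLayerQNN, TorchConnector, QuantumInstance) as they existed around 2021, which may differ in later releases.

```python
import torch
from qiskit import Aer
from qiskit.utils import QuantumInstance
from qiskit_machine_learning.neural_networks import TwoLayerQNN
from qiskit_machine_learning.connectors import TorchConnector

# A small parameterized quantum circuit exposed as a quantum neural network.
qi = QuantumInstance(Aer.get_backend("statevector_simulator"))
qnn = TwoLayerQNN(num_qubits=2, quantum_instance=qi)

# TorchConnector wraps the QNN as an ordinary torch.nn.Module, so it can be
# trained with standard PyTorch optimizers and composed with classical layers,
# without the user writing any circuit-level code.
model = TorchConnector(qnn)
print(model(torch.rand(1, qnn.num_inputs)))
```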

Published Date : May 12 2021

SUMMARY :

it's the CUBE with digital of the people, processes and technologies the CUBE is always a pleasure. and how it plays into that strategy? and the transactions associated with it. and talking about the that we have created is of the key announcements And the key thing that we And in the context of the ecosystem evolution? And so one of the things we and maybe some of the use cases. And a lot of the optimization to things that we do today. of the things we're doing going to power us, you know, like that state of the art, and it's of national importance as well.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jamie ThomasPERSON

0.99+

Pat KessingerPERSON

0.99+

Dave VellantePERSON

0.99+

Cleveland ClinicORGANIZATION

0.99+

JamiePERSON

0.99+

SamsungORGANIZATION

0.99+

IBMORGANIZATION

0.99+

Exxon MobilORGANIZATION

0.99+

Arvind KrishnaPERSON

0.99+

DavePERSON

0.99+

Jamie ThomasPERSON

0.99+

10QUANTITY

0.99+

2019DATE

0.99+

ExxonMobilORGANIZATION

0.99+

100 timesQUANTITY

0.99+

OhioLOCATION

0.99+

AsiaLOCATION

0.99+

Last yearDATE

0.99+

2023DATE

0.99+

Howard UniversityORGANIZATION

0.99+

last weekDATE

0.99+

ArvindPERSON

0.99+

last yearDATE

0.99+

South AfricaLOCATION

0.99+

Suez CanalLOCATION

0.99+

over 300,000 unique usersQUANTITY

0.99+

IntelORGANIZATION

0.99+

23 HBCUsQUANTITY

0.99+

QiskitTITLE

0.99+

firstQUANTITY

0.99+

todayDATE

0.99+

MoorePERSON

0.99+

Z LinuxTITLE

0.99+

over 200 organizationsQUANTITY

0.99+

LinuxTITLE

0.98+

over 4000 peopleQUANTITY

0.98+

first deliveryQUANTITY

0.98+

OpenShiftTITLE

0.98+

Think 2021COMMERCIAL_ITEM

0.97+

over 140 unique organizationsQUANTITY

0.97+

FirstQUANTITY

0.97+

seven nanometerQUANTITY

0.97+

over 20 machinesQUANTITY

0.97+

pandemicEVENT

0.97+

18 billion transistorsQUANTITY

0.97+

oneQUANTITY

0.96+

20 timesQUANTITY

0.96+

day oneQUANTITY

0.95+

over 500,000 unique downloadsQUANTITY

0.95+

one big exampleQUANTITY

0.94+

Think 2021COMMERCIAL_ITEM

0.93+

100 peopleQUANTITY

0.93+

about two weeks agoDATE

0.92+

over 1000 qubitQUANTITY

0.9+

I-SeriesCOMMERCIAL_ITEM

0.87+

z/OSTITLE

0.85+

six monthsQUANTITY

0.82+

few months agoDATE

0.8+

POWER10TITLE

0.79+

upQUANTITY

0.78+

PyTorchTITLE

0.78+

few weeks agoDATE

0.78+

1000s of clientsQUANTITY

0.76+

KC Choi, Samsung | IBM Think 2021


 

>> From around the globe, it's the Cube with digital coverage of IBM Think 2021, brought to you by IBM. >> Hello and welcome back, everyone, to the Cube's coverage of IBM Think 2021 virtual. I'm John Furrier, your host of the Cube. I'm excited to have this next guest, Cube alumni Casey Choi, corporate EVP, executive vice president and general manager at Samsung Mobile, the B2B and B2G team. Casey, great to see you. How you been? >> John, it is wonderful to see you, and it's been way too long. Great to be back on the Cube with you. Looking forward to our conversation, and hope you're safe. >> And same to you. Great to see you. I'm so excited. One of the things I've really admired about you and our conversations in the past is you've always had your finger on the pulse of the waves, and you've always been involved with some really great engineering work. And I want to dig into this now, because your role is really hitting the Industry 4.0 kind of wave, which is the confluence of tech, media, entertainment, every vertical, big data, IoT, with the distributed computing now called the cloud and edge. It really sets the table for what is now going to be the preferred architecture, probably for the next 20-plus years. So give us your view on how you see the changing landscape in the industry. >> Yeah, I think you covered, you know, all of the major seismic shifts that are happening here. And then, you know, as we've all experienced over the last year with the COVID pandemic, that's actually accelerated a lot of the thinking around the edge. We've certainly seen use cases proliferate, whether it be in things such as healthcare. Manufacturing has also taken, I think, a real hard look at the applicability of these types of solutions. We've seen things like, for example, 5G pick up in these sorts of industrial applications, as, you know, the industrial companies have thought about worker safety, as they've thought about automation, as they've thought about, you know, utilizing more protocols, as well as, you know, bringing these technologies and processes together in a way that will help to kind of reinvent their particular economic base, as well as the learnings that we've seen over the last year coming from these new safety protocols, as well as the need, now that the economy is picking back up, for productivity and greater efficiencies coming from these types of solutions. So we've seen that confluence happen, and then certainly, on our end, as our network connectivity has become much stronger and lower latency, the endpoint capabilities have increased dramatically over the last few years, as SoCs and others have taken root. We've seen the edge, if you will, start to be more extreme, in the sense that it's pushing further and further out beyond what we originally envisioned the edge to be. >> And the SoC trend actually highlights that it's not so much about Moore's law as it is more about more chips, more performance. If you look at actual performance, Dave and the team just put out a report on this, where there's much more performance now than ever before coming in from the combined, you know, combined processing power out there. So it's super, super amazing what you can do at the edge. Before we get into the edge, I want to just clarify, what is your new role there? I mean, Samsung is known for, obviously, the B2C with the phones and everything else, but you have a specific focus. What is your main focus there?
>> Yeah, our mission's pretty straightforward, and as everyone knows, you know, Samsung is this powerhouse consumer electronics company. We pride ourselves in, obviously, our position in that, but we also have a very significant role really in the business-to-business and in the government and financial services sector space, with our mobile devices as well as with our Knox security platform solution and device management platform. We actually provide a large portion of the secure devices for governments worldwide, and the Knox platform that is built into the majority of both our consumer and our business devices really allows for, if you will, that next protective layer on top of the Android OS that allows for things such as personal and professional profiles. So we produce those solutions out of my team, as well as provide really the go-to-market support and the R&D support for that platform, including an area that's growing rapidly for us, which is the rugged category, which is, you know, one of the key products that we're using for some of these edge applications that we'll be talking about. >> Great, let's jump into that. What are you guys doing specifically in the edge computing space? Let's dig into it. >> Yeah, I think, you know, maybe the place to start on that is, we're really kind of re-envisioning what the edge is. I mentioned a little earlier that, with what's occurring in the performance profile and really the functional profile, what is being produced at the device level, you know, we're talking about, in the last few years, the fidelity and the capabilities are, you know, what I would call computer-class type functions. And obviously mobile devices have always been communication gateways for a number of functions, whether they be, you know, videos or photos; they're multi-sensory in nature. And as this has become more practical, and the connective tissue has gotten there with 5G, as well as all kinds of other, you know, fast, low-latency communications capabilities, Wi-Fi 6 and UWB, you know, included within that, what we're finding is that the use case is to bring applications, especially cloud-native and container-native applications, to these devices, to be, you know, augmenting the endpoint user, the frontline worker, really the knowledge worker, and moving that capability further out, if you will, as an extension to cloud services as well as the MEC-type services. This is where we see it going, and really what we're trying to work on with IBM and with Red Hat is, how do we, you know, continue to fortify this, not only from an actual processing, AI/ML capability, but also equip these devices so that they can fully participate as part of a multi-hybrid-cloud architecture. The endpoint is really one of the last bastions where we have not kind of conquered bringing, you know, cloud-first, container-native applications really to that point, and we believe the time is right because of the capabilities that are there, along with, again, the connectivity that is becoming much more ubiquitous now, to allow for that type of architecture to exist. And we're starting to call this the intelligent human edge as well.
We think that the applications that we'll see for this are, you know, ones that will make the human operator more productive, safer, certainly more efficient, and we think that this augmentation of the frontline worker is an area that we are, you know, putting our stakes on in terms of pioneering, just because of our experience in that mobility space and in the consumer space. >> That's great. You brought up Red Hat and IBM. Obviously Red Hat was bought by IBM — Arvind, Arvind's the CEO. Well, I interviewed him in 2019 on the Cube at Red Hat Summit, ironically a couple of months later they bought the company. Just a smile on his face. He likes clouds. >> You had something to do with that, you know. >> He wanted to, I could see he wanted to say it, but he loves the cloud. Everyone who knows Arvind knows that he's into the cloud in a new way, and this edge piece that you mentioned, that you're using Red Hat and IBM for hybrid — this is what the new operating system is going to look like. It's a completely distributed system, and the edge is just part of that operating model. This is what their vision is, which I love, by the way. I think that redefines what that is. Are you saying that you guys are working with Red Hat and IBM for that hybrid edge piece? How does that work? Can you take me through that? >> Yeah, that's exactly right. I mean, obviously the ecosystem's bigger than that, but IBM and Red Hat really bring the expertise around container ecosystems, certainly the work that they have done in terms of multi-hybrid cloud, certainly the work that OpenShift has brought forward in terms of, you know, multi-platform capability. We really love the concept of a develop once, run anywhere sort of construct. And when you think about it, the mobile platforms specifically, you know, ours as well as others', have really been that last bastion of areas where more of the development is on a particular platform, it's more bespoke. We think that by broaching this, you know, in conjunction with IBM and Red Hat, this is going to give us the ability to have these device architectures become a full voting member, if you will, of that hybrid cloud architecture and of that microservices and container architecture that is becoming much more prevalent. So this is really the work that we're doing. And then obviously we're working at a vertical level to see where the applicable use cases are, in places such as the design studio we have in Singapore, where, with the Singaporean government, we're looking at really bringing a renaissance to Industry 4.0-type applications, smart factory automation, public safety. These are areas where we believe that this type of architecture can be deployed. >> That's awesome. And I totally believe that the edge is still gonna be pushed further and further out, honestly, having that open, open standards kind of hybrid. So I gotta ask you on the edge, while I got you here, you know, one of the things that you see clearly is the industrial edge, the factories and whatnot. You mentioned some of those, and then you got the human piece, which is, like, people have phones and wearables, and other things are gonna be happening. So as you start to have those endpoints, which are then gonna be connected into a distributed network, take a hybrid cloud, soon to be multiple clouds, but yeah, that's the subsystem within the cloud construct.
The complaint has been — not complaint, but the observation has been — if you look at it, that the edge is limited by power and connectivity. Okay, these are like key basic concepts. How is the connectivity option? I know 5G is coming, it's here, we're seeing it being deployed, we got people saying, hey, this is our business application, clearly got higher throughput, not as much range. Give us your take on this, because this becomes important. Obviously power is battery-driven, getting better and better, and power is not really that much of a problem, but connectivity seems to be. What's your vision of this? >> Yeah, and you know, there's a lot of ways to approach that. I will tell you, on the industrial side, at least in some of the deployments and POCs that we've been involved in over the last year to two years, connectivity is an issue, and a lot of it has to do with the infrastructure that is available in many of these, you know, plants or factories or, you know, points of distribution. They're not necessarily, you know, leading edge. In many cases we're dealing with, you know, what I would call subpar connectivity. It's not like an office complex, where you may have, you know, kind of state-of-the-art Wi-Fi capability or, you know, 10-gig capability or whatever it might be. So what we've found on that is, it requires actually quite a bit of work in terms of fine-tuning, both on the network infrastructure side, whatever that might be, and we've also found that on the device side, the programmability of the device, in terms of tuning it for whatever connectivity environment would be there. And we've worked with everything from, you know, Bluetooth and UWB to Wi-Fi 6 and everything in between, and in many cases there are multiple, you know, protocols or connectivity methods that are there. So, you know, one thing we've learned is that you can't necessarily assume, especially in a factory environment, that those conditions are going to allow for, you know, consistency. So you have to engineer around that, you know, and some of the things that we've done are really around making sure that we've got, you know, deployable programmability at the device, as well as, you know, more dynamic network tuning capabilities that will allow for, you know, better connectivity and handle things such as consistency. >> All right, Casey, great insight. Final question for you: why Samsung and IBM? What's the bottom line? >> Yeah, I think the bottom line is really straightforward. I mean, we've had a, you know, 30-year history of working together. You know, we've been mutual customers of each other. We do a lot of work for IBM in regards to foundry-type services and semiconductor services, and then we work very closely with them over many years on applications. So number one, there's been a natural relationship just in the services that we've provided to each other. But as we look at really the go-to-market, I mean, IBM brings so much credibility from a vertical market perspective. There's a trusted advisor type of status that I think is very profound, and it's been built over many years, you know, delivering on the promises. And on our end, I think what we bring is really this cycle time that is driven by our passion in the consumer space. And when we start to apply that into more of these vertical industrial, you know, vertical sectors, I think that combination is very powerful.
The services piece obviously comes into play with IBM, and then really the Red Hat piece of this just puts the icing on the cake, with really the market leadership in, you know, hybrid cloud and in the container-native architecture. So it's just a very powerful combo. And, you know, the cooperation there has been strong, and we continue to look forward to delivering more through that partnership. >> Casey, great to see you, and great to hear. You know, you got scalable infrastructure, you got modern applications at the edge, all on hybrid. Great, great partnership. Casey Choi, corporate executive vice president and general manager of the Samsung Mobile B2B team. Great to see you, and congratulations on your mission. It's an exciting project. Thanks for coming on the Cube and sharing. >> Great to see you, John. Take care of yourself, and looking forward to seeing you again. >> Okay, this is the Cube's coverage of IBM Think 2021. I'm John Furrier, your host of the Cube. Thanks for watching.

Published Date : Apr 15 2021

SUMMARY :

team Casey, great to see you how you been john it is wonderful to see you and it's been way too long. One of the things I've really admired about you and our conversations in the past protocols as well as you know, bringing these technologies and processes together in a way that I'll see the B2C with the phones and everything else, but you have a specific focus uh what is you know, one of the key products that we're using for some of these edge applications that will What are you guys doing specifically on the edge computing space? Yeah, I think, you know, maybe the place to start on that is uh we're really kind Well I interviewed in 2019 and the cube that red hat summit, ironically a couple You had something to do with that. knows that he's into the cloud in a new way in this edge piece that you mentioned that you're using uh certainly the work that open ship has brought forward in terms of, you know, So I gotta ask you on the edge just well I got you here, you know, one of the things that of these uh you know, plants or factories or you know, leadership in uh you know, hybrid cloud and in the container native architecture. Great to see you and congratulations on your mission. I'm john for your host of the cube.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
DavidPERSON

0.99+

IBMORGANIZATION

0.99+

SamsungORGANIZATION

0.99+

SingaporeLOCATION

0.99+

Casey ChoiPERSON

0.99+

ArvinPERSON

0.99+

30 yearQUANTITY

0.99+

Casey choiPERSON

0.99+

Red HatORGANIZATION

0.99+

CaseyPERSON

0.99+

10 gigQUANTITY

0.99+

2019DATE

0.99+

androidTITLE

0.99+

two yearsQUANTITY

0.99+

Samsung MobileORGANIZATION

0.99+

bothQUANTITY

0.99+

oneQUANTITY

0.98+

Think 2021COMMERCIAL_ITEM

0.98+

last yearDATE

0.97+

2021DATE

0.97+

over a yearQUANTITY

0.97+

covid pandemicEVENT

0.96+

johnPERSON

0.96+

KnoxTITLE

0.95+

red HatORGANIZATION

0.95+

Samsung Mobile BORGANIZATION

0.95+

a couple months laterDATE

0.93+

red hatORGANIZATION

0.92+

OneQUANTITY

0.92+

firstQUANTITY

0.92+

OsTITLE

0.89+

lastDATE

0.89+

next 20 plus yearsDATE

0.88+

ExecutivePERSON

0.86+

S O C.TITLE

0.83+

Arvin CeoPERSON

0.74+

Singaporean governmentORGANIZATION

0.73+

waveEVENT

0.73+

yearsDATE

0.72+

red hatCOMMERCIAL_ITEM

0.65+

E V P.ORGANIZATION

0.63+

tureORGANIZATION

0.62+

Executive vice presidentPERSON

0.58+

ChoiPERSON

0.56+

red hatEVENT

0.55+

five GsCOMMERCIAL_ITEM

0.54+

jOHnPERSON

0.51+

four dot ohEVENT

0.5+

redCOMMERCIAL_ITEM

0.5+

thingsQUANTITY

0.49+

Executive Vice PresidentPERSON

0.48+

yearsQUANTITY

0.46+

twoQUANTITY

0.43+

hatORGANIZATION

0.42+

KCPERSON

0.42+

hatTITLE

0.38+

Maribel Lopez & Zeus Kerravala | theCUBE on Cloud 2021


 

>>From around the globe, it's theCUBE, presenting theCUBE on Cloud, brought to you by SiliconANGLE. >>Okay, we're back here, live, at theCUBE on Cloud. This is Dave Vellante with my co-host, John Furrier. We're all remote. We're getting into the analyst power half hour. Really pleased to have Maribel Lopez here. She's the principal and founder of Lopez Research. And Zeus Kerravala, who is the principal and founder of ZK Research. Guys, great to see you. Let's get into it. How are you doing? >>Great. How have you been? >>Good, thanks. Really good. John's hanging in there quarantining, and we're all healthy, so I hope you guys are too. Hey, Maribel, let's start with you. Here we are in 2021, having just exited one of the strangest years, if not the strangest year, of our lives. Looking back at the past decade of cloud, and looking forward: how do you see it? Where do we come from, where are we at, and where are we going? >>Well, we obviously started with the whole "let's build a public cloud" and everything was about public cloud. Then we went to the notion of private cloud, then we had hybrid cloud and multi-cloud. So we've done a lot of different clouds right now. And I think where we are today is that there's a healthy recognition on the part of the cloud computing providers that you need to give it to the customers the way they want it, not the way you've decided to build it. So how do you meet them where they are, so that they can have a cloud-like experience wherever they want their data to be? >>Yes, and as you've observed, in the early days of cloud you heard a lot of rhetoric. It was private cloud, and now we're hearing a lot of multi-cloud and so forth. But initially, a lot of the traditional vendors kind of pooh-poohed it. They said we analysts were all cloud crazy, but they seem to have gotten their religion. >>Well, everyone's got a definition of cloud, but I actually think we're right in the midst of another transformation of cloud, as Maribel talked about. We went from private clouds, which was really hosting, to public cloud, to multi-cloud and hybrid cloud. And if you look at the last post that I put on SiliconANGLE, which was talking about F5's acquisition of Volterra, I actually think we're in the midst of the transition to what's called distributed cloud, where if you look at modernized cloud apps today, they're actually made up of services from different clouds and also distributed edge locations. And that's going to have a pretty profound impact on the way we build out, because those distributed edges, be it a telco edge, a cellular edge, whatever, the services that live there are much more ephemeral in nature, right? So the way we secure and the way we connect change quite a bit. But I think the great thing about cloud is that we've seen several evolutionary changes in what the definition is, and we're going through that now, which is pretty cool to think about, right? It's not a static thing; it's an ongoing transition. But I think we're moving into this distributed cloud era, which to me is a lot more complex than what we were dealing with in the past. >>I'm actually pretty excited about that, because I think this move to edge and the distribution that you've talked about means we now have processing everywhere. We've got it on devices, we've got it in cars, we're moving the data centers closer and closer to where the action's happening. 
And I think that's gonna be a huge trend for 2021. Is that distributed that you were talking about a lot of edge discussion? You >>know what? The >>reason we're doing This, too, is we want. It's not just we're moving the data closer to the user, right? And some. If you think you brought up the autonomous vehicle right in the car being an edge, you think of the data that generates right? There's some things such as the decision to stop or not right that should be done in car. I don't wanna transport that data all the way back to Google him back to decide whether I want to stop. You could also use the same data determine whether drivers driving safely for insurance purposes, right? So the same data give me located at the edge or in a centralized cloud for different purposes, and I think that's what you know, kind of cool about this is we're being able to use our data and much different ways. Now. >>You know, it's interesting is it's so complex. It's mind blowing because this is distributed computing. Everyone kind of agrees this is where it is. But if you think about the complexity and I want to get your guys reaction to this because you know some of the like side fringe trend discussions are data sovereignty, misinformation as a vulnerability. Okay, you get the chips now you got gravitas on with Amazon in front. Apple's got their own chips. Intel is gonna do a whole new direction. So you've got tons of computer. And then you mentioned the ephemeral nature. How do you manage those? What's the observe ability look like? They're what's the trust equation? So all these things kind of play into it. It sounds almost mind blowing, just even thinking about it. But how do you guys, this analyst tryto understand where someone's either blowing bullshit or kind of like has the real deal? Because all those things come into play? I mean, you could have a misinformation campaign targeting the car. Let's say Hey, you know that that data is needs to be. This is this is misinformation who's a >>in a lot of ways, this creates almost unprecedented opportunity now for for starts and for companies to transform right. The fundamental tenet of my research has always been share shifts happen when markets transition and we're in the middle of the big one. If the computer resource is we're using, John and the application resource will be using or ephemeral nature than all the things that surrounded the way we secured the way we connect. Those also have to be equal, equally agile, right, So you can't have, you know, you think of a micro services based application being secured with traditional firewalls, right? Just the amount of, or even virtual the way that the length of time it takes to spend those things up is way too long. So in many ways, this distributed cloud change changes everything in I T. And that that includes all of the services in the the infrastructure that we used to secure and connect. And that's a that is a profound change, and you mentioned the observe ability. You're right. That's another thing that the traditional observe ability tools are based on static maps and things and, you know, traditional up, down and we don't. Things go up and down so quickly now that that that those don't make any sense. So I think we are going to see quite a rise in different types of management tools and the way they look at things to be much more. I suppose you know Angela also So we can measure things that currently aren't measurable. >>So you're talking about the entire stack. Really? 
Changing is really what you're inferring anyway from your commentary. And that would include the programming model as well, wouldn't it? >>Absolutely. Yeah. You know, the thing that is really interesting about where we have been versus where we're going is we spent a lot of time talking about virtual izing hardware and moving that around. And what does that look like? And that, and creating that is more of a software paradigm. And the thing we're talking about now is what is cloud is an operating model look like? What is the manageability of that? What is the security of that? What? You know, we've talked a lot about containers and moving into a different you know, Dev suck ups and all those different trends that we've been talking about, like now we're doing them. So we've only got into the first crank of that. And I think every technology vendor we talked to now has to address how are they going to do a highly distributed management and security landscape? Like, what are they gonna layer on top of that? Because it's not just about Oh, I've taken Iraq of something server storage, compute and virtualized it. I now have to create a new operating model around it. In a way, we're almost redoing what the OS I stack looks like and what the software and solutions are for that. >>So >>it was really Hold on, hold on, hold on their lengthened. Because that side stack that came up earlier today, Mayor. But we're talking about Yeah, we were riffing on the OSC model, but back in the day and we were comparing the S n a definite the, you know, the proprietary protocol stacks that they were out there and someone >>said Amazon's S N a. Is that recall? E think that's what you said? >>No, no. Someone in the chest. That's a comment like Amazon's proprietary meaning, their scale. And I said, Oh, that means there s n a But if you think about it, that's kind of almost that can hang. Hang together. If the kubernetes is like a new connective tissue, is that the TCP pipe moment? Because I think Os I kind of was standardizing at the lower end of the stack Ethernet token ring. You know, the data link layer physical layer and that when you got to the TCP layer and really magic happened right to me, that's when Cisco's happened and everything started happening then and then. It kind of stopped because the application is kinda maintain their peace there. A little history there, but like that's kind of happening now. If you think about it and then you put me a factor in the edge, it just kind of really explodes it. So who's gonna write that software? E >>think you know, Dave, your your dad doesn't change what you build ups. It's already changed in the consumer world, you look atyou, no uber and Waze and things like that. Those absolute already highly decomposed applications that make a P I calls and DNS calls from dozens of different resource is already right. We just haven't really brought that into the enterprise space. There's a number, you know, what kind of you know knew were born in the cloud companies that have that have done that. But they're they're very few and far between today. And John, your point about the connectivity. We do need to think about connectivity at the network layer. Still, obviously, But now we're creating that standardization that standardized connectivity all the way a player seven. So you look at a lot of the, you know, one of the big things that was a PDP. I calls right, you know, from different cloud services. And so we do need to standardize in every layer and then stitch that together. 
So that does make It does make things a lot more complicated. Now I'm not saying Don't do it because you can do a whole lot more with absolute than you could ever do before. It's just that we kind of cranked up the level of complexity here, and flowered isn't just a single thing anymore, right? That's that. That's what we're talking about here It's a collection of edges and private clouds and public clouds. They all have to be stitched together at every layer in orderto work. >>So I was I was talking a few CEOs earlier in the day. We had we had them on, I was asking them. Okay, So how do you How do you approach this complexity? Do you build that abstraction layer? Do you rely on someone like Microsoft to build that abstraction layer? Doesn't appear that Amazon's gonna do it, you know? Where does that come from? Or is it or is it dozens of abstraction layers? And one of the CEO said, Look, it's on us. We have to figure out, you know, we get this a p I economy, but But you guys were talking about a mawr complicated environment, uh, moving so so fast. Eso if if my enterprise looks like my my iPhone APs. Yes, maybe it's simpler on an individual at basis, but its app creep and my application portfolio grows. Maybe they talk to each other a little bit better. But that level of complexity is something that that that users are gonna have to deal >>with what you thought. So I think quite what Zs was trying to get it and correct me if I'm wrong. Zia's right. We've got to the part where we've broken down what was a traditional application, right? And now we've gotten into a P. I calls, and we have to think about different things. Like we have to think about how we secure those a p I s right. That becomes a new criteria that we're looking at. How do we manage them? How do they have a life cycle? So what was the life cycle of, say, an application is now the life cycle of components and so that's a That's a pretty complex thing. So it's not so much that you're getting app creep, but you're definitely rethinking how you want to design your applications and services and some of those you're gonna do yourself and a lot of them are going to say it's too complicated. I'm just going to go to some kind of SAS cloud offering for that and let it go. But I think that many of the larger companies I speak to are looking for a larger company to help them build some kind of framework to migrate from what they've used with them to what they need tohave going forward. >>Yeah, I think. Where the complexities. John, You asked who who creates the normalization layer? You know, obviously, if you look to the cloud providers A W s does a great job of stitching together all things AWS and Microsoft does a great job of stitching together all things Microsoft right in saying with Google. >>But >>then they don't. But if if I want to do some Microsoft to Amazon or Google Toe Microsoft, you know, connectivity, they don't help so much of that. And that's where the third party vendors that you know aviatrix on the network side will tear of the security side of companies like that. Even Cisco's been doing a lot of work with those companies, and so what we what we don't really have And we probably won't for a while if somebody is gonna stitch everything together at every >>you >>know, at every layer. So Andi and I do think we do get after it. Maribel, I think if you look at the world of consumer APS, we moved to a lot more kind of purpose built almost throwaway apps. They serve a purpose or to use them for a while. 
Then you stop using them. And in the enterprise space, we really haven't kind of converted to them modeling on the mobile side. But I think that's coming. Well, >>I think with micro APS, right, that that was kind of the issue with micro APS. It's like, Oh, I'm not gonna build a full scale out that's gonna take too long. I'm just gonna create this little workflow, and we're gonna have, like, 200 work flows on someone's phone. And I think we did that. And not everybody did it, though, to your point. So I do think that some people that are a little late to the game might end up in in that app creep. But, hey, listen, this is a fabulous opportunity that just, you know, throw a lot of stuff out and do it differently. What What? I think what I hear people struggling with ah lot is be to get it to work. It typically is something that is more vertically integrated. So are you buying all into a Microsoft all you're buying all into an Amazon and people are starting to get a little fear about doing the full scale buy into any specific platform yet. In absence of that, they can't get anything to work. >>Yeah, So I think again what? What I'm hearing from from practitioners, I'm gonna put a micro serve. And I think I think, uh, Mirabelle, this is what you're implying. I'm gonna put a micro services layer. Oh, my, my. If I can't get rid of them, If I can't get rid of my oracle, you know, workloads. I'm gonna connect them to my modernize them with a layer, and I'm gonna impart build that. I'm gonna, you know, partner to get that done. But that seems to be a a critical path forward. If I don't take that step, gonna be stuck in the path in the past and not be able to move forward. >>Yeah, absolutely. I mean, you do have to bridge to the past. You you aren't gonna throw everything out right away. That's just you can't. You can't drive the bus and take the wheels off that the same time. Maybe one wheel, but not all four of them at the same time. So I think that this this concept of what are the technologies and services that you use to make sure you can keep operational, but that you're not just putting on Lee new workloads into the cloud or new workloads as decomposed APS that you're really starting to think about. What do I want to keep in whatever I want to get rid of many of the companies you speak Thio. They have thousands of applications. So are they going to do this for thousands of applications? Are they gonna take this as an opportunity to streamline? Yeah, >>well, a lot of legacy never goes away, right? And I was how companies make this transition is gonna be interesting because there's no there's no really the fact away I was I was talking to this one company. This is New York Bank, and they've broken their I t division down into modern I t and legacy I t. And so modern. Everything is cloud first. And so imagine me, the CEO of Legacy i e 02 miracles. But what they're doing, if they're driving the old bus >>and >>then they're building a new bus and parallel and eventually, you know, slowly they take seats out of the old bus and they take, you know, the seat and and they eventually start stripping away things. That old bus, >>But >>that old bus is going to keep running for a long time. And so stitching the those different worlds together is where a lot of especially big organizations that really can't commit to everything in the cloud are gonna struggle. But it is a It is a whole new world. And like I said, I think it creates so much opportunity for people. 
You know, e >>whole bus thing reminds me that movie speed when they drive around 55 miles an hour, just put it out to the airport and just blew up E >>got But you know, we all we all say that things were going to go away. But to Zia's point, you know, nothing goes away. We're still in 2021 talking about mainframes just as an aside, right? So I think we're going to continue tohave some legacy in the network. But the But the issue is ah, lot will change around that, and they're gonna be some people. They're gonna make a lot of money selling little startups that Just do one specific piece of that. You know, we just automation of X. Oh, >>yeah, that's a great vertical thing. This is the This is the distributed network argument, right? If you have a note in the network and you could put a containerized environment around it with some micro services um, connective tissue glue layer, if you will software abstract away some integration points, it's a note on the network. So if in mainframe or whatever, it's just I mean makes the argument right, it's not core. You're not building a platform around the mainframe, but if it's punching out, I bank jobs from IBM kicks or something, you know, whatever, Right? So >>And if those were those workloads probably aren't gonna move anywhere, right, they're not. Is there a point in putting those in the cloud? You could say Just leave them where they are. Put a connection to the past Bridge. >>Remember that bank when you talk about bank guy we interviewed in the off the record after the Cube interviews like, Yeah, I'm still running the mainframe, so I never get rid of. I love it. Run our kicks job. I would never think about moving that thing. >>There was a large, large non US bank who said I buy. I buy the next IBM mainframe sight unseen. Andi, he's got no choice. They just write the check. >>But milliseconds is like millions of dollars of millisecond for him on his back, >>so those aren't going anywhere. But then, but then, but they're not growing right. It's just static. >>No, no, that markets not growing its's, in fact. But you could make a lot of money and monetizing the legacy, right? So there are vendors that will do that. But I do think if you look at the well, we've already seen a pretty big transition here. If you look at the growth in a company like twilio, right, that it obviates the need for a company to rack and stack your own phone system to be able to do, um, you know, calling from mobile lapse or even messaging. Now you just do a P. I calls. Um, you know, it allows in a lot of ways that this new world we live in democratizes development, and so any you know, two people in the garage can start up a company and have a service up and running another time at all, and that creates competitiveness. You know much more competitiveness than we've ever had before, which is good for the entire industry. And, you know, because that keeps the bigger companies on their toes and they're always looking over their shoulder. You know what, the banks you're looking at? The venues and companies like that Brian figure out a way to monetize. So I think what we're, you know well, that old stuff never going away. The new stuff is where the competitive screen competitiveness screen. >>It's interesting. Um IDs Avery. Earlier today, I was talking about no code in loco development, how it's different from the old four g l days where we didn't actually expand the base of developers. Now we are to your point is really is democratizing and, >>well, everybody's a developer. 
It could be a developer, right? A lot of these tools were written in a way that line of business people create their own APs to point and click interface is, and so the barrier. It reminds me of when, when I started my career, I was a I. I used to code and HTML build websites and then went to five years. People using drag and drop interface is right, so that that kind of job went away because it became so easy to dio. >>Yeah, >>sorry. A >>data e was going to say, I think we're getting to the part. We're just starting to talk about data, right? So, you know, when you think of twilio, that's like a service. It's connecting you to specific data. When you think of Snowflake, you know, there's been all these kinds of companies that have crept up into the landscape to feel like a very specific void. And so now the Now the question is, if it's really all about the data, they're going to be new companies that get built that are just focusing on different aspects of how that data secured, how that data is transferred, how that data. You know what happens to that data, because and and does that shift the balance of power about it being out of like, Oh, I've created these data centers with large recommend stack ums that are virtualized thio. A whole other set of you know this is a big software play. It's all about software. >>Well, we just heard from Jim Octagon e You guys talking earlier about just distributed system. She basically laid down that look. Our data architectures air flawed there monolithic. And data by its very nature is distributed so that she's putting forth the whole new paradigm around distributed decentralized data models, >>which Howie shoe is just talking about. Who's gonna build the visual studio for data, right? So programmatic. Kind of thinking around data >>I didn't >>gathering. We didn't touch on because >>I do think there's >>an opportunity for that for, you know, data governance and data ownership and data transport. But it's also the analytics of it. Most companies don't have the in house, um, you know, data scientists to build on a I algorithms. Right. So you're gonna start seeing, you know, cos pop up to do very specific types of data. I don't know if you saw this morning, um, you know, uniforms bought this company that does, you know, video emotion detection so they could tell on the video whether somebody's paying attention, Not right. And so that's something that it would be eso hard for a company to build that in house. But I think what you're going to see is a rise in these, you know, these types of companies that help with specific types of analytics. And then you drop you pull those in his resource is into your application. And so it's not only the storage and the governance of the data, but also the analytics and the analytics. Frankly, there were a lot of the, uh, differentiation for companies is gonna come from. I know Maribel has written a lot on a I, as have I, and I think that's one of the more exciting areas to look at this year. >>I actually want to rip off your point because I think it's really important because where we left off in 2020 was yes, there was hybrid cloud, but we just started to see the era of the vertical eyes cloud the cloud for something you know, the cloud for finance, the cloud for health care, the telco and edge cloud, right? So when you start doing that, it becomes much more about what is the specialized stream that we're looking at. So what's a specialized analytic stream? What's a specialized security stack stream? Right? 
So until now, everything was just trying to get to what I would call horizontal parity, where you took the things you had before and replicated them in a new world with some different software, but it was still kind of the same. And now we're saying, okay, let's try to move beyond everything just being a generic set of cloud services and toward more vertical cloud services. >>That is the evolution of everything in technology, the first movement. Everything we do in technology is we try to make the new thing look like the old thing, right? The first PCs were mainframe emulators. We took our virtual servers and we made them look like physical servers, then eventually figured out, oh, there's a whole bunch of other stuff that I can do that I couldn't do before. And that's the part we're trying to hop into now, right? Like, oh, now that I've gone cloud native, what can I do that I couldn't do before? So we're just sort of hitting that inflection point. That's when you're really going to see the growth take off. But for whatever reason, in IT all we ever do is try to replicate the old until we figure out the old didn't really work, and we should do something new. >>Well, let me throw something old and controversial, an old trope, out there: consumerization of IT. I mean, if you think about it, what year was the first year you heard that term? Was it 15 years ago? 20 years ago? When did that first- >>Podcast? Yeah, so that was a long time ago, anyway. So if you think about it, it kind of is happening. And what does it mean, right? What does that actually mean in today's world? Does it exist? >>Well, you heard, like, Fred Luddy, who's the founder of ServiceNow, saying that it was his dream to bring consumer-like experiences to the enterprise. Well, it didn't really happen. I mean, ServiceNow is pretty complicated compared to what we do here, but it's evolving. >>Yeah, I think there's also the enterprise-ization of consumer technology, John. You look at Zoom: they came to market with a highly consumer-facing product, realized it didn't have the security tools to really be corporate grade, and then they had to go invest a bunch of money in that. So I think we can swing the pendulum all the way over to the consumer side, but that kind of failed us, right? So now we're trying to bring it back to center a little bit, where we blend the two together. >>Cloud kind of brings that together. I never looked at it that way. That's interesting and surprising on the consumer side. Yeah. >>All right, guys, hey, we've got to wrap. Zeus, Maribel, always a pleasure having you guys on. Great insights, and the half hour flies by. Thanks so much, we appreciate it. >>Thank you guys. >>All right, keep it right there. More great content coming from theCUBE on Cloud. Dave Vellante with John Furrier and a whole lineup still to come. Keep it right there.

Published Date : Jan 22 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Mary | PERSON | 0.99+
John Ferrier | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
John | PERSON | 0.99+
Dave | PERSON | 0.99+
Cisco | ORGANIZATION | 0.99+
Fred Luddy | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
Maribel Lopez | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
Angela | PERSON | 0.99+
2021 | DATE | 0.99+
2020 | DATE | 0.99+
thousands | QUANTITY | 0.99+
New York Bank | ORGANIZATION | 0.99+
Volterra | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
Zeus Kerravala | PERSON | 0.99+
ZK research | ORGANIZATION | 0.99+
telco | ORGANIZATION | 0.99+
five years | QUANTITY | 0.99+
iPhone | COMMERCIAL_ITEM | 0.99+
Mirabelle | PERSON | 0.99+
Maribel | PERSON | 0.99+
one wheel | QUANTITY | 0.99+
Jim Octagon | PERSON | 0.99+
Apple | ORGANIZATION | 0.99+
Brian | PERSON | 0.99+
Lopez Research | ORGANIZATION | 0.99+
two people | QUANTITY | 0.99+
millions of dollars | QUANTITY | 0.98+
First | QUANTITY | 0.98+
20 years ago | DATE | 0.98+
one | QUANTITY | 0.98+
five | QUANTITY | 0.98+
two | QUANTITY | 0.98+
Zias Caraballo | PERSON | 0.98+
around 55 miles an hour | QUANTITY | 0.98+
first movement | QUANTITY | 0.98+
15 years ago | DATE | 0.97+
first year | QUANTITY | 0.97+
first crank | QUANTITY | 0.97+
today | DATE | 0.97+
first | QUANTITY | 0.97+
Andi | PERSON | 0.97+
four | QUANTITY | 0.96+
Zia | PERSON | 0.96+
this year | DATE | 0.96+
Cloud | TITLE | 0.96+
dozens | QUANTITY | 0.95+
200 work flows | QUANTITY | 0.94+
this morning | DATE | 0.94+
Avery | PERSON | 0.93+
Thio | PERSON | 0.92+
uber | ORGANIZATION | 0.92+
Intel | ORGANIZATION | 0.91+
single thing | QUANTITY | 0.9+
earlier today | DATE | 0.9+
US | LOCATION | 0.89+
Google Toe | ORGANIZATION | 0.89+
one company | QUANTITY | 0.88+
tons of computer | QUANTITY | 0.87+
dozens of abstraction layers | QUANTITY | 0.87+
Earlier today | DATE | 0.86+
Iraq | LOCATION | 0.86+
Os | TITLE | 0.85+
edge | ORGANIZATION | 0.84+
Cuban Cloud | OTHER | 0.83+
twilio | ORGANIZATION | 0.83+

Brian Bohan and Chris Wegmann | AWS Executive Summit 2020


 

>> Announcer: From around the globe, it's theCUBE. With digital coverage of AWS reInvent Executive Summit 2020, sponsored by Accenture and AWS. >> Hello and welcome back to theCUBE's coverage of AWS reInvent 2020. This is special programming for the Accenture Executive Summit, where all the thought leaders are going to extract the signal from the noise and share with you their perspective on this year's reInvent conference as it relates to customers' digital transformation. Brian Bohan is the director and head of the Accenture AWS Business Group at Amazon Web Services. Brian, great to see you. And Chris Wegmann is the Accenture Amazon Business Group technology lead at Accenture. Guys, this conversation is about the technology vision. Chris, I want to start with you because you heard Andy Jassy's keynote. You heard about the strategy of digital transformation, how you've got to lean into it, you've got to have the guts to go for it, and you've got to decompose. He went everywhere.(chuckles) So what did you hear? What was striking about the keynote? Because he covered a lot of topics. >> Yeah. It was epic as always from Andy. A lot of topics, a lot to cover in the three hours. There were a couple of things that stood out for me. First of all, hybrid. The new concept of hybrid and how Andy talked about it, bringing the compute and the power to all parts of an enterprise, whether it be at the edge or in the big public cloud, whether it be in an Outpost or wherever it'd be, right, with containerization now. Being able to do Amazon containerization in my data center, that's awesome. I think that's going to make a big difference. All of that sitting underneath the Amazon console and billing and things like that, which is great. I'll also say the chips, right? I know compute is always something that we kind of take for granted, but I think again this year Amazon and Andy really focused on what they're doing with the chips and compute, and compute is still at the heart of everything in cloud. And that continued advancement is making an impact and will continue to make a big impact.
Being able to take computer vision and embed machine learning and computer vision, and do that as a managed capability at the edge for customers. And so we see this across a number of industries. And so what we're really thinking about is customers no longer have to make trade-offs and have to think about those choices, that they can really deploy natively in the cloud and then they can take those capabilities, train those models, and then deploy them where they need to whether that's on premises or at the edge, whether it be in a factory or retail environment. I think we're really well positioned when hopefully next year we start seeing the travel industry rebound and the need more than ever really to kind of rethink about how we kind of monitor and make those environments safe. Having this kind of capability at the edge is really going to help our customers as we come out of this year and hopefully rebound next year. >> Chris, I want to go back to you for a second. It's hard to pick your favorite innovation from the keynote because, Brian, just reminded me of some things I forgot happened. It was like a buffet of innovation. Some keynotes have one or two, there was like 20. You got the industrial piece that was huge. Computer vision, machine learning, that's just a game changer. The connect thing came out of nowhere in my opinion. I mean, it's a call center technology so it's boring as hell, what are you going to do with that?(Brian and Chris chuckle) It turns out it's a game changer. It's not about the calls but the contact and that's distant intermediating in the stack as well. So again, a feature that looks old is actually new and relevant. What was your favorite innovation announcement? >> It's hard to say. I will say my personal favorite was the Mac OS. I think that is a phenomenal just addition, right? And the fact that AWS has worked with Apple to integrate the Nitro chip into the iMac and offer that out. A lot of people are doing development for IOS and that stuff and that's just been a huge benefit for the development teams. But I will say, I'll come back to Connect. You mentioned it but you're right. It's a boring area but it's an area that we've seen huge success with since Connect was launched and the additional features that Amazon continues to bring, obviously with the pandemic and now that customer engagement through the phone, through omni-channel has just been critical for companies, right? And to be able to have those agents at home, working from home versus being in the office, it was a huge advantage for several customers that are using Connect. We did some great stuff with some different customers but the continue technology like you said, the call translation and during a call to be able to pop up those keywords and have a supervisor listen is awesome. And some of that was already being done but we are stitching multiple services together. Now that's right out of the box. And that Google's location is only going to make that go faster and make us to be able to innovate faster for that piece of the business. >> It's interesting not to get all nerdy and business school like but you've got systems of records, systems of engagement. 
If you look at the call center and the Connect thing, what got my attention was not only the model of disintermediating that part of the engagement in the stack but what actually cloud does to something that's a feature or something that could be an element like say call center, the old days of calling the 800 number and getting some support. You got infra chip, you have machine learning, you actually have stuff in the in the stack that actually makes that different now. The thing that impressed me was Andy was saying, you could have machine learning detect pauses, voice inflections. So now you have technology making that more relevant and better and different. So a lot going on. This is just one example of many things that are happening from a disruption innovation standpoint. What do you guys think about that? Am I getting it right? Can you share other examples? >> I think you are right and I think what's implied there and what you're saying and even in the other Mac OS example is the ability... We're talking about features, right? Which by themselves you're saying, Oh, wow! What's so unique about that? But because it's on AWS and now because whether you're a developer working with Mac iOS and you have access to the 175 plus services that you can then weave into your new application. Talk about the Connect scenario. Now we're embedding that kind of inference and machine learning to do what you say, but then your data Lake is also most likely running in AWS, right? And then the other channels whether they be mobile channels or web channels or in-store physical channels, that data can be captured and that same machine learning could be applied there to get that full picture across the spectrum, right? So that's the power of bringing you together on AWS, the access to all those different capabilities and services and then also where the data is and pulling all that together for that end to end view. >> Can you guys give some examples of work you've done together? I know there's stuff we've reported on, in the last session we talked about some of the connect stuff but that kind of encapsulates where this is all going with respect to the tech. >> Yeah. I think one of them, it was called out on Doug's Partner Summit is a SAP Data Lake Accelerator, right? Almost every enterprise has SAP, right? And getting data out of SAP has always been a challenge, right? Whether it be through data warehouses and AWS, or sorry, SAP BW. What we've focused on is getting that data when you have SAP on AWS, getting that data into the Data Lake, right? Getting it into a model that you can pull the value out and the customers can pull the value out, use those AI models. So that's one thing we worked on in the last 12 months. Super excited about seeing great success with customers. A lot of customers had ideas. They want to do this, they had different models. What we've done is made it very simplified. Framework which allows customers to do it very quickly, get the data out there and start getting value out of it and iterating on that data. We saw customers are spending way too much time trying to stitch it all together and trying to get it to work technically. And we've now cut all of that out and they can immediately start getting down to the data and taking advantage of those different services that are out there by AWS. >> Brian, you want to weigh in as things you see as relevant builds that you guys done together that kind of tease out the future and connect the dots to what's coming? 
>> I'm going to use a customer example. We worked with, it just came out, with Unilever around their blue air, connected, smart air purifier. And what I think is interesting about that, I think it touches on some of the themes we're talking about as well as some of the themes we talked about in the last session, which is we started that program before the pandemic, but Unilever recognized that they needed to differentiate their product in the marketplace, move to more of a services oriented business which we're seeing as a trend. We enabled this capability. So now it's a smart air purifier that can be remote managed. And now when the pandemic hit, they are in a really good position, obviously, with a very relevant product and capability to be used. And so, that data then as we were talking about is going to reside on the cloud. And so the learning that can now happen about usage and about filter changes, et cetera can find its way back into future iterations of that picked out that product. And I think that's keeping with what Chris is talking about where we might be systems of record like in SAP, how do we bring those in and then start learning from that data so that we can get better on our future iterations? >> Hey, Chris, on the last segment we did on the business mission session, Andy Tay from your team talked about partnerships within a century and working with other folks. I want to take that now on the technical side because one of the things that we heard from Doug's keynote and during the partner day was integrations and data were two big themes. When you're in the cloud technically, the integrations are different. You're going to get unique things in the public cloud that you're just not going to get on-premise access to other cloud native technologies and companies. How do you see the partnering of Accenture with people within your ecosystem and how the data and the integration play together? What's your vision? >> Yeah. I think there's two parts of it. One there's from a commercial standpoint, right? Some marketplace, you heard Dave talk about that in the partner summit, right? That marketplace is now bringing together this ecosystem in a very easy way to consume by the customers and by the users and bringing multiple partners together. And we're working with our ecosystem to put more products out in the marketplace that are integrated together already. I think one from a technical perspective though. If you look at Salesforce, I talked a little earlier about Connect. Another good example technically underneath the covers, how we've integrated Connect and Salesforce, some of it being pre-built by AWS and Salesforce, other things that we've added on top of it, I think are good examples. And I think as these ecosystems these ISVs put their products out there and start exposing more and more APIs on the Amazon platform may opening it up, having those pre-built network connections there between the different VPCs of the different areas within within a customer's network and having them all opened up and connected and having all that networking done underneath the covers. It's one thing to call the APIs, it's one thing to have access to those and that's not a big focus of a lot of ISVs and customers who build those APIs and expose them but having that network infrastructure underneath and being able to stay within the cloud, within AWS to make those connections that pass that data. We always talk about scale, right? 
It's one thing if I just need to pass like a simple user ID back and forth, right? That's fine. We're not talking massive data sets, whether it be seismic data or whatever it be, passing those large data sets between customers across the Amazon network is going to open up the world. >> Yeah, I see huge possibilities there and love to keep on this story. I think it's going to be important and something to keep track of. I'm sure you guys will be on top of it. One of the things I want to dig into with you guys now is Andy had kind of this philosophical thing in his keynote talk about societal change and how tough the pandemic is. Everything's on full display and this kind of brings out kind of like where we are and the truth. If you look at the truth it's a virtual event. I mean, it's a website and you got some sessions out there, we're doing remote best we can and you've got software and you've got technology and the other concept of a mechanism, it's software, it does something It does a purpose. Accenture, you guys have a concept called Living Systems where growth strategy powered by technology. How do you take the concept of a living organism or a system and replace the mechanism staleness of computing and software? And this is kind of interesting because we're on the cusp of a major inflection point post COVID. I get the digital transformation being slow. That's yes, that's happening. There's other things going on in society. What do you guys think about this Living Systems concept? Yeah. I'll start. I think the living system concept, it started out very much thinking about how do you rapidly change your system, right? And because of cloud, because of DevOps, because of all these software technologies and processes that we've created, that's where it started making it much easier, make it a much faster being able to change rapidly. But you're right. I think if you now bring in more technologies, the AI technology, self-healing technologies. Again, you heard Andy in his keynote talk about the systems and services they're building to detect problems and resolve those problems, right? Obviously automation is a big part of that. Living Systems, being able to bring that all together and to be able to react in real time to either when a customer asks, either through the AI models that have been generated and turning those AI models around much faster and being able to get all the information that came in the last 20 minutes, right? Society is moving fast and changing fast and even in one part of the world, if something in 10 minutes can change. And being able to have systems to react to that, learn from that and be able to pass that on to the next country especially in this world of COVID and things changing very quickly and diagnosis and medical response all that so quickly to be able to react to that and have systems pass that information, learn from that information is going to be critical. >> That's awesome. Brian, one of the things that comes up every year is, oh, the cloud's scalable. This year I think we've talked on theCUBE before, years ago certainly with the Accenture and Amazon. I think it was like three or four years ago. Yeah. The clouds horizontally scalable but vertically specialized at the application layer. But if you look at the Data Lake stuff that you guys have been doing where you have machine learning, the data is horizontally scalable and then you got the specialization in the app changes the whole vertical thing. 
You don't need to have a whole vertical solution or do you? So, how has this year's cloud news impacted vertical industries? Because it used to be, oh, oil and gas, financial services. They've got a team for that. We got a stack for that. Not anymore. Is it going away? What's changing? >> Well. It's a really good question. I think what we're seeing, and I was just on a call this morning talking about banking and capital markets and I do think the challenges are still pretty sector specific. But what we do see is the kind of commonality when we start looking at the, and we talked about this, the industry solutions that we're building as a partnership, most of them follow the pattern of ingesting data, analyzing that data and then being able to provide insights and then actions, right? So if you think about creating that kind of common chassis of that in just the Data Lake and then the machine learning, and you talk about the nuances around SageMaker and being able to manage these models, what changes then really are the very specific industries' algorithms that you're writing, right, within that framework. And so, we're doing a lot and Connect is a good example of this too, where you look at it and yeah, customer service is a horizontal capability that we're building out, but then when you stamp it into insurance or retail banking, or utilities, there are nuances then that we then extend and build so that we meet the unique needs of those industries and that's usually around those models. >> Yeah. I think this year was the first reInvent that I saw real products coming out that actually solved that problem. I mean, it was there last year SageMaker was kind of moving up the stack, but now you have apps embedding machine learning directly in and users don't even know it's in there. I mean, cause this is kind of where it's going, right? I mean-- >> You saw that was in announcements, right? How many announcements where machine learning is just embedded in? I mean, CodeGuru, DevOps Guru, the Panorama we talked about, it's just there. >> Yeah. I mean having that knowledge about the linguistics and the metadata, knowing the business logic, those are important specific use cases for the vertical and you can get to it faster. Chris, how is this changing on the tech side, your perspective? >> Yeah. I keep coming back to AWS and cloud makes it easier, right? All this stuff can be done and some of it has been done, but what Amazon continues to do is make it easier to consume by the developer, by the customer and to actually embed it into applications much easier than it would be if I had to go set up the stack and build it all on them and embed it, right? So it's shortcoming that process and again, as these products continue to mature, right, and some of this stuff is embedded, it makes that process so much faster. It reduces the amount of work required by the developers the engineers to get there. So, I'm expecting you're going to see more of this, right. I think you're going to see more and more of these multi connected services by AWS, that has a lot of the AI ML pre-configured Data Lakes, all that kind of stuff embedded in those services. So you don't have to do it yourself and continue to go up the stack. And we always talk about Amazon's built for builders, right? 
But, builders have been super specialized and are becoming, as engineers were being asked to be bigger and bigger and to be be able to do more stuff and I think these kind of integrated services are going to help us do that >> And certainly needed more now when you have hybrid edge that they're going to be operating with microservices on a cloud model and with all those advantages that are going to come around the corner for being in the cloud. I mean, I think there's going to be a whole clarity around benefits in the cloud with all these capabilities and benefits. Cloud Guru I think it's my favorite this year because it just points to why that could happen. I mean that happens because of the cloud data.(laughs) If you're on-premise, you may not have a little Cloud Guru. you are going to get more data but they're all different. Edge certainly will come in too. Your vision on the edge, Chris, how you see that evolving for customers because that could be complex, new stuff. How is it going to get easier? >> Yeah. It's super complex now, right? I mean, you got to design for all the different edge 5G protocols are out there and solutions, right? Amazon's simplifying that. Again, I come back to simplification, right? I can build an app that works on any 5G network that's been integrated with AWS, right. I don't have to set up all the different layers to get back to my cloud or back to my my bigger data set. And that's kind of choking. I don't even know where to call the cloud anymore. I got big cloud which is a central and I go down then you've got a cloud at the edge. Right? So what do I call that? >> Brian: It's just really computing.(laughing) Exactly. So, again, I think is this next generation of technology with the edge comes right and we put more and more data at the edge. We're asking for more and more compute at the edge, right? Whether it be industrial or for personal use or consumer use, that processing is going to get more and more intense to be able to maintain under a single console, under a single platform and be able to move the code that I developed across that entire platform, whether I have to go all the way down to the very edge at the 5G level, right, or all the way back into the bigger cloud and how that processing in there, being able to do that seamlessly is going to allow the speed of development that's needed. >> Wow. You guys done a great job and no better time to be a techie or interested in technology or computer science or social science for that matter. This is a really perfect store. A lot of problems to solve, a lot of change happening, positive change opportunities, a lot of great stuff. Final question guys. Five years working together now on this partnership with AWS and Accenture. Congratulations, you guys are in pole position for the next wave coming. What's exciting you guys? Chris, what's on your mind? Brian, what's getting you guys pumped up? >> Well, again, I come back to Andy mentioned it in his keynote, right? We're seeing customers move now, right. Five years ago we knew customers were going to do this. We built a partnership to enable these enterprise customers to make that journey, right? But now, even more we're seeing them move at such great speed, right? Which is super excites me, right? Because I can see... Being in this for a long time now, I can see the value on the other end. We've been wanting to push our customers as fast as they can through the journey and now they're moving. Now they're getting the religion, they're getting there. 
They see they need to do it to change their business, so that's what excites me. It just excites me, the speed at which we're going to see the movement. >> Yeah. >> Yeah, I'd agree with that. I mean, I just think getting customers to the cloud is super important work and we're obviously doing that and helping accelerate that. It's what we've been talking about: once they're there, all the possibilities that become available, right? Through the common data capabilities, the access to the 175 or so AWS services. I also think, and this has kind of permeated through this week at re:Invent, is the opportunity, especially in those industries that do have an industrial aspect, a manufacturing aspect, or a really strong physical aspect, of bringing together IT, operational technology, and the business with all these capabilities. And I think edge, and pushing machine learning down to the edge and analytics at the edge, is really going to help us do that. And so I'm super excited by all that possibility, because I feel like we're just scratching the surface there. >> It's a great time to be building out, and this is the time for reconstruction and reinvention. Big theme, so many storylines in the keynote and the events. It's going to keep us busy here at SiliconANGLE on theCUBE for the next year. Gentlemen, thank you for coming on. I really appreciate it. Thanks. >> Thank you. All right. Great conversation. We're getting technical. We're going to go another 30 minutes; a lot to talk about. A lot of storylines here at AWS re:Invent 2020 at the Accenture Executive Summit. I'm John Furrier. Thanks for watching. (upbeat music)

Published Date : Dec 16 2020


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Chris Wegmann | PERSON | 0.99+
Chris | PERSON | 0.99+
Andy Tay | PERSON | 0.99+
Brian | PERSON | 0.99+
Andy | PERSON | 0.99+
John Furrier | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Brian Bohan | PERSON | 0.99+
Andy Jackson | PERSON | 0.99+
Dave | PERSON | 0.99+
Unilever | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
next year | DATE | 0.99+
last year | DATE | 0.99+
Google | ORGANIZATION | 0.99+
Accenture | ORGANIZATION | 0.99+
Five years | QUANTITY | 0.99+
Deepak Singh | PERSON | 0.99+
IOS | TITLE | 0.99+
Andy Jassy | PERSON | 0.99+
Apple | ORGANIZATION | 0.99+
iMac | COMMERCIAL_ITEM | 0.99+
one | QUANTITY | 0.99+
two | QUANTITY | 0.99+
three | DATE | 0.99+
Doug | PERSON | 0.99+
Five years ago | DATE | 0.99+
two parts | QUANTITY | 0.99+
AWS Business Group | ORGANIZATION | 0.98+
This year | DATE | 0.98+
10 minutes | QUANTITY | 0.98+
175 plus services | QUANTITY | 0.98+
Accenture Executive Summit | EVENT | 0.98+
20 | QUANTITY | 0.98+
four years ago | DATE | 0.98+
three hours | QUANTITY | 0.98+
this year | DATE | 0.98+
800 | OTHER | 0.98+
One | QUANTITY | 0.98+

Rebecca Weekly, Intel Corporation | AWS re:Invent 2020


 

>>from around the globe. It's the Cube with digital coverage of AWS reinvent 2020 sponsored by Intel, AWS and our community partners. Welcome back to the Cubes Coverage of 80 Bus Reinvent 2020. This is the Cube virtual. I'm your host, John Ferrier normally were there in person, a lot of great face to face, but not this year with the pandemic. We're doing a lot of remote, and he's got a great great content guest here. Rebecca Weekly, who's the senior director and senior principal engineer at for Intel's hyper scale strategy and execution. Rebecca. Thanks for coming on. A lot of great news going on around Intel on AWS. Thanks for coming on. >>Thanks for having me done. >>So Tell us first, what's your role in Intel? Because obviously compute being reimagined. It's going to the next level, and we're seeing the sea change that with Cove in 19, it's putting a lot of pressure on faster, smaller, cheaper. This is the cadence of Moore's law. This is kind of what we need. More horsepower. This is big theme of the event. What's what's your role in intel? >>Oh, well, my team looks after a joint development for product and service offerings with Intel and A W s. So we've been working with AWS for more than 14 years. Um, various projects collaborations that deliver a steady beat of infrastructure service offerings for cloud applications. So Data Analytics, ai ml high performance computing, Internet of things, you name it. We've had a project or partnership, several in those the main faces on thanks to that relationship. You know, today, customers Committee choose from over 220 different instance types on AWS global footprint. So those feature Intel processors S, P. J s ai accelerators and more, and it's been incredibly rewarding an incredibly rewarding partnership. >>You know, we've been covering Intel since silicon angle in the Cube was formed 10 years ago, and this is what we've been to every reinvent since the first one was kind of a smaller one. Intel's always had a big presence. You've always been a big partner, and we really appreciate the contribution of the industry. Um, you've been there with with Amazon. From the beginning, you've seen it grow. You've seen Amazon Web services become, ah, big important player in the enterprise. What's different this year from your perspective. >>Well, 2020 has been a challenging here for sure. I was deeply moved by the kinds of partnership that we were able to join forces on within telling a W s, uh, to really help those communities across the globe and to address all the different crisis is because it it hasn't just been one. This has been, ah, year of of multiple. Um, sometimes it feels like rolling crisis is So When the pandemic broke out in India in March of this year, there were schools that were forced to close, obviously to slow the spread of the disease. And with very little warning, a bunch of students had to find themselves in remote school out of school. Uh, so the Department of Education in India engaged career launcher, which is a partner program that we also sponsor and partner with, and it really they had to come up with a distance learning solutions very quickly, uh, that, you know, really would provide Children access to quality education while they were remote. For a long as they needed to be so Korean launcher turned to intel and to a W s. 
We helped design infrastructure solution to meet this challenge and really, you know, within the first, the first week, more than 100 teachers were instructing classes using that online portal, and today it serves more than 165,000 students, and it's going to accommodate more than a million over the fear. Um, to me, that's just a perfect example of how Cove it comes together with technology, Thio rapidly address a major shift in how we're approaching education in the times of the pandemic. Um, we also, you know, saw kind of a climate change set of challenges with the wildfires that occurred this year in 2020. So we worked with a partner, Roman, as well as a partner who is a partner with AWS end until and used the EEC Thio C five instances that have the second Gen Beyond available processors. And we use them to be able to help the Australian researchers who were dealing with that wildfire increase over 60 fold the number of parallel wildfire simulations that they could perform so they could do better forecasting of who needed to leave their homes how they could manage those scenarios. Um, and we also were able toe work with them on a project to actually thwart the extinction of the Tasmanian Devils. Uh, in also in Australia. So again, that was, you know, an HPC application. And basically, by moving that to the AWS cloud and leveraging those e c two instances, we were able to take their analysis time from 10 days to six hours. And that's the kind of thing that makes the cloud amazing, right? We work on technology. We hope that we get thio, empower people through that technology. But when you can deploy that technology a cloud scale and watch the world's solve problems faster, that has made, I would say 2020 unique in the positivity, right? >>Yeah. You don't wanna wish this on anyone, but that's a real upside for societal change. I mean, I love your passion on that. I think this is a super important worth calling out that the cloud and the cloud scale With that kind of compute power and differentiation, you gets faster speed to value not just horsepower, but speed to value. This is really important. And it saved lives that changes lives. You know, this is classic change. The world kind of stuff, and it really is on center stage on full display with Cove. I really appreciate, uh, you making that point? It's awesome. Now with that, I gotta ask you, as the strategist for hyper scale intel, um, this is your wheelhouse. You get the fashion for the cloud. What kind of investments are you making at Intel To make more advancements in the clock? You take a minute, Thio, share your vision and what intel is working on? >>Sure. I mean, obviously were known more for our semiconductor set of investments. But there's so much that we actually do kind of across the cloud innovation landscape, both in standards, open standards and bodies to enable people to work together across solutions across the world. But really, I mean, even with what we do with Intel Capital, right, we're investing. We've invested in a bunch of born in the cloud start up, many of whom are on top of AWS infrastructure. Uh, and I have found that to be a great source of insights, partnerships, you know, again how we can move the needle together, Thio go forward. So, in the space of autonomous learning and adopt is one of the start ups we invested in. And they've really worked to use methodologies to improve European Health Co network monitoring. 
So they were actually getting a ton of false positive running in their previous infrastructure, and they were able to take it down from 50 k False positive the day to 50 using again a I on top of AWS in the public cloud. Um, using obviously and a dog, you know, technology in the space of a I, um we've also seen Capsule eight, which is an amazing company that's enabling enterprisers enterprises to modernize and migrate their workloads without compromising security again, Fully born in the cloud able to run on AWS and help those customers migrate to the public cloud with security, we have found them to be an incredible partner. Um, using simple voice commands on your on your smartphone hypersonic is another one of the companies that we've invested in that lets business decision makers quickly visualized insects insight from their disparate data sources. So really large unstructured data, which is the vast majority of data stored in the world that is exploding. Being able to quickly discern what should we do with this. How should we change something about our company using the power of the public cloud? I'm one of the last ones that I absolutely love to cover kind of the wide scope of the waves. That cloud is changing the innovation landscape, Um, Model mine, which is basically a company that allows people thio take decades of insights out of the mainframe data and do something with it. They actually use Amazon's cloud Service, the cloud storage service. So they were able Teoh Teik again. Mainframe data used that and be able to use Amazon's capabilities. Thio actually create, you know, meaningful insights for business users. So all of those again are really exciting. There's a bunch of information on the Intel sponsor channel with demos and videos with those customer stories and many, many, many more. Using Amazon instances built on Intel technology, >>you know that Amazon has always been in about startup born in the cloud. You mentioned that Intel has always been investing with Intel Capital, um, generations of great investments. Great call out there. Can you tell us more about what, uh, Amazon technology about the new offerings and Amazon has that's built on Intel because, as you mentioned at the top of the interview, there's been a long, long standing partnership since inception, and it continues. Can you take a minute to explain some of the offerings built on the Intel technology that Amazon's offering? >>Well, I've always happened to talk about Amazon offerings on Intel products. That's my day job. You know, really, we've spent a lot of time this year listening to our customer feedback and working with Amazon to make sure that we are delivering instances that are optimized for fastest compute, uh, better virtual memory, greater storage access, and that's really being driven by a couple of very specific workloads. So one of the first that we are introducing here it reinvents is the n five the n instant, and that's really ah, high frequency, high speed, low Leighton see network variants of what was, you know, the traditional Amazon E. C two and five. Um, it's powered by a second Gen Intel scalable processors, The Cascade late processors and really these have the highest all court turbo CPU performance from the on scalable processors in the club, with a frequency up to 4.5 gigahertz. That is really exciting for HPC work clothes, uh, for gaining for financial applications. 
Simulation and modeling applications, these are ones where, you know, automation in the automotive space, the aerospace industries, energy, telecom, all of them can really benefit from that super low latency and high frequency. So that's really what the M5zn is all about. The other two that we've introduced here today: there's the R5b, and that is an instance that can utilize up to 60 gigabits per second of Amazon Elastic Block Store bandwidth, and really, again, that bandwidth and the 260,000 IOPS that it can deliver are great for large relational databases, so the database and file system kind of workloads. This is really where we are super excited, and again, this is built on Cascade Lake, the second-generation Xeon Scalable processor, and it takes advantage of many different aspects of how we're optimizing in that processor. So we were excited to partner with customers, again using EBS as well as various other solutions, to ensure that data ingestion times for applications are reduced and they can see the delivery of what you were mentioning before, right, time to results. It's all about time to results. And the last one is the D3en. The D3en is really the new D3 instance, again on Cascade Lake. We offer those with high-density local hard drive storage, so very cost optimized, but really allowing you to have significantly higher network speed and disk throughput. So very cost optimized for storage applications: that's seven times more storage capacity and 80% lower cost per terabyte of storage compared to the previous D2 instances. So we really find that it's ideal for workloads in distributed and clustered file systems, big data, and analytics. Of course, you need a lot of capacity in high-capacity data lakes. You know, normally you want to optimize a data lake for performance, but if you need tons of capacity, you need to walk that line, and I think the D3en really will help you do that. And of course I would be absolutely remiss not to mention that last month we announced the Amazon Web Services partnership with us on an Intel Select Solution, which makes AWS the first, you know, cloud service provider to really launch an Intel Select Solution. And it's in the HPC space, so this is really about high performance computing. Developers can spend weeks or months researching, you know, how to manage compute, storage, network, and software configuration options. It's not a field that has gone fully cloud native by default, and those recipes are still coming together. So this is where the AWS ParallelCluster solution comes in. It's an Intel Select Solution for simulation and modeling on top of AWS. We're really excited about how it's going to make it easier for scientists and researchers like the ones I mentioned before, but also IT administrators, to deploy, manage, and just automatically scale those high performance computing clusters in the AWS cloud. >> Wow, that's a lot. A lot of purpose built, I mean, you guys are really nailing it. I mean, low latency, you've got storage, you've got density. I mean, these are use cases where there are real workloads that require that kind of specialty, I mean, beyond general purpose. Now you're covering much more than the general-purpose use case. This is what cloud does, this is amazing. Um, final comments this year. I want to get your thoughts, because you mentioned cloud service provider. You mentioned the Select program, which is an elite thing, right? Okay, we're anticipating more cloud service providers.
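For readers who want to try the instance families mentioned above, a rough boto3 sketch follows. The AMI ID is a placeholder, the instance sizes are arbitrary examples, and this is illustrative only, not an official recommendation.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical AMI ID; any x86_64 Amazon Linux 2 AMI in the region would do.
AMI_ID = "ami-0123456789abcdef0"

# High-frequency M5zn for latency-sensitive simulation/modeling workloads.
ec2.run_instances(
    ImageId=AMI_ID, InstanceType="m5zn.2xlarge", MinCount=1, MaxCount=1
)

# R5b for large relational databases that need high EBS bandwidth;
# EbsOptimized is stated explicitly here for clarity.
ec2.run_instances(
    ImageId=AMI_ID, InstanceType="r5b.2xlarge", MinCount=1, MaxCount=1,
    EbsOptimized=True,
)

# D3en for dense, low-cost local storage (distributed file systems, data lakes).
ec2.run_instances(
    ImageId=AMI_ID, InstanceType="d3en.2xlarge", MinCount=1, MaxCount=1
)
```

The point of the sketch is simply that choosing between M5zn, R5b, and D3en is a one-parameter change; the workload characteristics (clock speed, EBS bandwidth, dense local storage) drive which one you pick.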
We're expecting Mawr innovation around chips and silicon and software. This is just getting going. It feels like to me, it's just the pulse is different this year. It's faster. The cadence has changed. As a strategist, What's your final comments? Where is this all going? Because this is pretty different. Its's not what it was pre code, but I feel like this is going to continue transforming and being faster. What's your thoughts? >>Absolutely. I mean, the cloud has been one of the biggest winners in a time of, you know, incredible crisis for our world. I don't think anybody has come out of this time without understanding remote work, you know, uh, remote retail, and certainly a business transformation is inevitable and required thio deliver in a disaster recovery kind of business continuity environment. So the cloud will absolutely continue on continue to grow as we enable more and more people to come to it. Um, I personally, I couldn't be more excited than to be able Thio leverage a long term partnership, incredible strength of that insulin AWS partnership and these partnerships with key customers across the ecosystem. We do so much with SVS Os Vives s eyes MSP, you know, name your favorite flavor of acronym, uh, to help end users experience that digital transformation effectively, whatever it might be. And as we learn, we try and take those learnings into any environment. We don't care where workloads run. We care that they run best on our architecture. Er and that's really what we're designing. Thio. And when we partner between the software, the algorithm on the hardware, that's really where we enable the best and user demand and the end use their time to incite and use your time to market >>best. >>Um, so that's really what I'm most excited about. That's obviously what my team does every day. So that's of course, what I'm gonna be most excited about. Um, but that's certainly that's that's the future that you see. And I think it is a bright and rosy one. Um, you know, I I won't say things I'm not supposed to say, but certainly do be sure to tune into the Cube interview with It's on. And you know, also Chatan, who's the CEO of Havana and obviously shaken, is here at A W s, a Z. They talk about some exciting new projects in the AI face because I think that is when we talk about the software, the algorithms and the hardware coming together, the specialization of compute where it needs to go to help us move forward. But also, the complexity of managing that heterogeneity at scale on what that will take and how much more we need to do is an industry and as partners to make that happen. Um, that is the next five years of managing. You know how we are exploding and specialized hardware. I'm excited about that, >>Rebecca. Thank you for your great insight there and thanks for mentioning the Cube interviews. And we've got some great news coming. We'll be breaking that as it gets announced. The chips in the Havana labs will be great stuff. I wouldn't be remiss if I didn't call out the intel. Um, work hard, play hard philosophy. Amazon has a similar approach. You guys do sponsor the party every year replay party, which is not gonna be this year. So we're gonna miss that. I think they gonna have some goodies, as Andy Jassy says, Plan. But, um, you guys have done a great job with the chips and the performance in the cloud. And and I know you guys have a great partner. Concerts provide a customer in Amazon. It's great showcase. Congratulations. >>Thank you so much. 
I hope you all enjoy all of re:Invent, even as you adapt to the new times.
Rebecca Weekly here, senior director and senior principal engineer for Intel's hyperscale strategy and execution, here in theCUBE breaking down the Intel partnership with AWS. A lot of good stuff happening under the covers in compute. I'm John Furrier, your host of theCUBE. We are theCUBE Virtual. Thanks for watching.

Published Date : Dec 10 2020


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Australia | LOCATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Andy Jassy | PERSON | 0.99+
Rebecca | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
John Ferrier | PERSON | 0.99+
80% | QUANTITY | 0.99+
10 days | QUANTITY | 0.99+
India | LOCATION | 0.99+
more than a million | QUANTITY | 0.99+
more than 165,000 students | QUANTITY | 0.99+
European Health Co | ORGANIZATION | 0.99+
Intel Capital | ORGANIZATION | 0.99+
Chatan | PERSON | 0.99+
Telkom | ORGANIZATION | 0.99+
first | QUANTITY | 0.99+
50 k | QUANTITY | 0.99+
more than 100 teachers | QUANTITY | 0.99+
2020 | DATE | 0.99+
three | QUANTITY | 0.99+
Department of Education | ORGANIZATION | 0.99+
five beats | QUANTITY | 0.99+
first one | QUANTITY | 0.99+
Intel | ORGANIZATION | 0.99+
six hours | QUANTITY | 0.99+
today | DATE | 0.99+
more than 14 years | QUANTITY | 0.99+
John | PERSON | 0.98+
last month | DATE | 0.98+
50 | QUANTITY | 0.98+
Havana | LOCATION | 0.98+
10 years ago | DATE | 0.98+
Intel Corporation | ORGANIZATION | 0.98+
this year | DATE | 0.98+
both | QUANTITY | 0.97+
first week | QUANTITY | 0.97+
2nd 10 | QUANTITY | 0.97+
Thio | PERSON | 0.97+
Moore | PERSON | 0.97+
over 60 fold | QUANTITY | 0.96+
one | QUANTITY | 0.96+
pandemic | EVENT | 0.96+
A W s. | ORGANIZATION | 0.96+
Cube | COMMERCIAL_ITEM | 0.96+
seven | QUANTITY | 0.95+
decades | QUANTITY | 0.94+
five instances | QUANTITY | 0.94+
Thio | TITLE | 0.93+
over 220 different instance | QUANTITY | 0.93+

Dave Brown, Amazon | AWS re:Invent 2020


 

>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel and AWS. Welcome back to theCUBE's virtual coverage of AWS re:Invent 2020. I'm John Furrier, your host. We are theCUBE Virtual, not there in person but doing this remote, as is AWS, although they are on stage live. And we're here with Dave Brown, Vice President of EC2 Compute. Great to see you again. Great keynote last night, kicking off everything for the opening night. Great stuff. >> Yeah, well, John, it's always good to be on theCUBE. Thanks for having me back. >> You know, you're in the hot seat these days, in the sense that there's so much going on. I mean, Andy could do a three-week announcement keynote. It was three hours of nonstop; you take a break to go to the bathroom and you missed two announcements, right? So much going on. You opened up re:Invent 2020 with your announcement of EC2 Mac instances, and there was a ton of compute. The theme was really, you know, reinventing and reimagining compute, both of which I want to get into. But let's start with the hard news. Tell me about the Mac instances. You had a great use case there that you illustrated in your talk. But where is this coming from? Obviously Mac developers are big, but is this something you saw from customers, or was it a necessity? Take us through the thinking around the Mac instances, the EC2 Mac instances.
Well, a So we realized that the technology was there, the customer asked, was there and then obviously went to Apple and worked with them very closely to make it happen. And so that's kind of how it all came together. And I was incredibly excited to announce it last night. And the feedback today has just been amazing. A lot of excitement. >>Yeah, take me through the use case because, you know, obviously there's two trends going on. There's custom chips and server list kind of thing happening where you guys, I mean, really doing a good job of the eye as layer, innovating there and then platform as a service. All that software on top. I totally get that. You could see that happening. Chips custom ships to Intel, A, M, D. And others. Now you got Mac hardware. Where's the innovation use case because one would start would say, Hey, why don't you care about whether it's Mac hardware or not. Because I'm server lists. I should be programming the infrastructure actually be getting compute generically. Where does the Mac tying come in? Because that's the first question I was thinking of was, I'm a Mac user. I love Mac, but I'm also got some windows actually going on now. And ultimately, do I really care if it's compute? What's your reaction to that? Yeah, >>absolutely. I mean, if you look at Apple's ecosystem today, right, they have millions of applications in the APP store. They have 28 million developers worldwide, actually building those applications just incredible. And many of those applications, all these millions in the In the APP store itself, there's many more applications that are both by enterprises and companies, right? We have an application that we use internally at Amazon is available on my phone. That's not in the APP store, and you know, many companies are doing that and to build applications for the ecosystem, they have to be built on Mac hardware. And that's just how Apple works, right? So if you wanna build for iPad or iPhone or even Apple TV and Apple watch, you have to build those applications on a Mac. And so what we see companies doing is, you know, the old develop a meme off. Well, it works on my computer, right when you build something, you don't wanna be bullied on your local laptop for production. So they typically have a fleet of machines that they either under somebody's desk or in a data center somewhere that they use for for building these Mac applications. And so it's not possible to build a Mac application on anything other than a Mac itself. And we when we looked at it, we really didn't feel that virtualization made sense, right? Apple? I mean, they have some some virtualization that they're able to do within Mac OS itself. But if you think about how do we solve the customer use case, it's really bringing apple hardware too easy to to solve the problem and giving customers that exactly same exact same experience that they have on prep. And if you look into it like that, models just worked right. We gave them better access. Uh, you know, they've been using that data which you normally say, Hey, don't don't run production workloads on a beta. But you know, I found out if I interview with the BPS at Intuit critique that they've actually moved 80% of their production pulled wear clothes too easy to already to run on the Mac instances. And so that, and that's in the space of two months. And so, just as seamless ability to move because it's the same hardware is kind of what we were going >>after. 
Great, thanks for sharing that and say, one thing I wanna point out is Mac does have their own chips as well. They're going custom chips. Amazon's going custom chips. And I think I think you nailed what I was trying to understand, which is this developer community for Mac. And there's some things that are purpose built for Mac devices. So on Mac ecosystem, get the marketplace as well as you know, that that was the hardware PCs and devices, and they're only doing more and more. So this brings me to the i o t. Um, piece of it, because Apple does make devices that people wear and I watch is, um, iPhones. I mean, they're not computers anymore. They're everything. So this kind of brings up the edge conversation. So whether it's an iPhone or a five G in a Metro or I'm a stadium watching a football game and there's some sensor camera vision industrial thing there, this is the new normal. This is where you guys are kind of eating, eating up the software side that that business, because there's new capabilities here. Can you explain how compute he's, particularly C two gets to the edges because no one wants to move data around. They wanna move, compute, not data, because data is expensive and it's and it's fat. So we we talked about that we keep on years ago, but you gotta move. Compute. So how does that work Take us through your vision? >>Absolutely. And this is This is a massively growing area for us. I mean, you mentioned Apple's new M one silicon Apple silicon that they just launched a swell, and we're super excited about Apple's been doing there. We've been doing the same thing with our grab. It's on two processor and really saving customers. An incredible amount on price performance. Tried customers moving and getting 40% improvement and price performance just by moving to grab it on too. It's just incredible. Um, in terms of the edge, you know, we started this journey. We started this journey quite some time ago and bringing, you know, Lambda functions to cloudwatch and things like that. How do we bring compute to the edge? We took a look at five G, which I think it's gonna feel a lot of this right if if we look at our cell phones today was actually just talking to the Apple team yesterday with the iPhone, only came out, you know, 13 years ago. It's kind of amazing to think just how much progress we've had and what four g did for the device that's in our pocket in terms of, you know, just how much we rely on that today and what we get. Well, five g is just a step function in both in terms of latency, but also in terms of throughput. And so, you know, one of the projects we announced last year with Verizon and we now Andy announced this morning we're also gonna be rolling out with Katy D I and SK Telecom and Vodafone next year. Um is a project always like that brings aws compute to the edge of the telco network. And so with Verizon, we now have eight locations around the U. S. Where we have AWS compute capacity. And what I mean by that is literally C five instances uh, G four GPU instances for customers that want to do influence and graphics processing on the edge. And that's embedded into the five G network on DSO customers. You know, we've got a number of customers that are doing a lot of interesting things with five G in the sports area, where they have five G cameras that are, you know, submitted directly to wavelength. We no longer need to drive a truck to a stadium to record a game. 
You just have five G cameras, um, to, you know, automated factories where they doing robotics in factories and yet really low latency. And they don't want the computer, the factory they wanted in five G and so just exciting area for us. That's growing really, really quickly. Thea Other thing we did is obviously with local zones. We launched our first local zones in L a X last year, Los Angeles on that's being used by the movie industry, so you know right now is a lot of exciting up and running off the covert and shut down for a period of time and filming the next release of all of our favorite episodes and across all of these various streaming platforms. And a lot of that work is actually the post production is being done on on AWS on G four instances within the Los Angeles region. So, you know, very low agency for colorization animation, special effects, all that sort of things happening there. What we heard from a lot of customers was they loved outposts as well, which is our offering to put a server into a data center. And you heard from riot games in Andy's Keynote, where they actually bought a number of outposts and put them all over the U. S. And also other places of the world to really lower the Leighton see for their latest game. And so what Andy also just announced is the availability off three additional local zones. So Atlanta, Miami and Houston Sorry, Boston Miami in Houston available today, and then additional 12 available local zones next year, and what that does is that sort of spreads AWS capacity compute capacity at the edge in all of our major metropolitan hubs all of their capacities on the AWS backbone as well, but brings customers that low latency connectivity that they're looking for. Gaming developers were, you know, every every millisecond counts in terms of gameplay on so super excited to be going after that use case, which I think, you know, it's difficult to tell what the next 10 years is gonna be like. But I think Layton's he's gonna have a big part to play in the types of applications we see on our phones going forward. >>Great stuff, final question for you as we wrap up, obviously with virtualization with virtualization. But you know, the cove it is. And he pointed out, People are gonna change, is gonna be winners and losers. He kind of clearly pointed out, But the people who do lean into the cloud who have been on the cloud or taking advantage of the tail winds of cove in because of the capabilities there are two bills air higher, and you should be happy for that. But they're also gonna have more demand for you to say, Hey, I need more services. So How do you speak to those people who are leaning in who are leveraging, more, compute? What should they be looking at? What kinds of services should be connecting into compute? How should they be thinking about the future of compute so that they can take advantage of those capabilities? The lower costs, higher performance? What things are complementary for these customers as they come in, not toe dip in the water kind of things against really driving. And what do they need? >>Yeah, absolutely. And this has been a big focus on us. You know, things has bean, as I cover in my keynote, which leadership session that I'm doing tomorrow Wednesday. You know, a lot of this year has been helping customers through covert and what covert is meant for their business. Whether that is cost savings for many of them or whether it's just demand, you know that they've never experienced are expected before. 
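As a rough illustration of the Local Zone model described above, the sketch below opts a region into the Los Angeles Local Zone group and launches a GPU instance into a subnet there. The group and zone names, VPC ID, CIDR block, and AMI ID are assumptions to verify against your own account; treat this as a sketch, not a reference deployment.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Local Zone groups are opt-in; "us-west-2-lax-1" is the Los Angeles group
# referenced above (treat the exact name as an assumption to verify).
ec2.modify_availability_zone_group(
    GroupName="us-west-2-lax-1",
    OptInStatus="opted-in",
)

# A subnet created in the Local Zone keeps compute close to end users;
# the VPC ID, CIDR, and zone name here are placeholders.
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.0.64.0/24",
    AvailabilityZone="us-west-2-lax-1a",
)

# Instances launched into that subnet then serve low-latency workloads
# such as graphics rendering or game servers at the edge.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="g4dn.xlarge",
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["Subnet"]["SubnetId"],
)
```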
I mean, we've been incredibly hard at work servicing those customers, right? I actually catch up with Scott Sikora in my keynote; he leads our capacity team. We talk through what it meant and how we actually provided the capacity that our customers needed during COVID times. But for a customer moving to us, the first thing is obviously we want to find ways to make them very successful in the cloud, but more importantly, lower price performance for them. So what we wanted to do is give them the best possible performance that's available at the lowest possible cost. And if you look at a number of the announcements that Andy made today, you know, whether it's our latest Graviton processor, when you move to Arm, I think customers often overestimate how much work it will be. And when I talk to them after they have moved, it wasn't actually that much work; they got it up and running relatively quickly. So it's simpler than people expect, but it's an opportunity to save 40% on price performance. You know, then there are these newer workloads like graphics. We just launched the new G4ad, which is an AMD-based GPU solution, the first time we have had an AMD GPU on EC2, and that's also looking to save, you know, upwards of 40% on price performance over other GPU offerings, so just incredibly exciting for graphics workloads. And then in the machine learning space, I think machine learning has just become the new normal; everybody is doing it. You know, just three years ago everybody was thinking about whether they should do it and how they would use it. Now that a lot of companies are doing it, it's really, how do I use it more? And that comes down to, again, saving costs. And so with our Inferentia chip, and then the new Habana chip we just announced with the work we're doing with Intel, and then the new Trainium chip for training, we're really working to lower the cost of machine learning. And so we've seen many customers, like Alexa, which was a great use case the other day, being able to lower the cost of inference for Alexa by 35%. Again, it just helps customers, you know, move to the cloud. But, I mean, just generally, we're trying to support customers wherever they are. You know, there are many customers in their own data centers looking to move to AWS, and we have great models that can support them with our existing compute. The Savings Plans offering we announced last year is just great for saving costs and getting the price down. So there's a lot you can look at. You know, I could go on forever, really. >> It certainly is more. We'll do a deeper dive follow-up after re:Invent, but it is a wake-up call, as I wrote in my post, for cloud. Finally, I've been saying this for years: horizontal scalability is a disruption on the infrastructure side, but you've got vertical specialization with data to create great modern apps with machine learning. And I see it playing out in full display here, as Andy said, right now. So all these benefits and all these opportunities to disrupt horizontally and then leverage the data, all tied together, all coming together. You're clearly leading the team. Dave Brown, Vice President of EC2, in charge of the team that's driving the future of compute. Thanks for coming on theCUBE live coverage. Thanks. >> Thanks for having me. >> Okay.
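To show how small the switch to Graviton2 can look in practice, here is a hedged boto3 sketch that finds an arm64 Amazon Linux 2 AMI and launches an M6g instance. The AMI name filter and instance size are illustrative assumptions; the real migration work is rebuilding your own application for arm64.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find a recent arm64 Amazon Linux 2 AMI published by Amazon.
# The name pattern below is an assumption to verify for your region.
images = ec2.describe_images(
    Owners=["amazon"],
    Filters=[
        {"Name": "architecture", "Values": ["arm64"]},
        {"Name": "name", "Values": ["amzn2-ami-hvm-*-arm64-gp2"]},
        {"Name": "state", "Values": ["available"]},
    ],
)
latest = sorted(images["Images"], key=lambda i: i["CreationDate"])[-1]

# Launch a Graviton2-based M6g instance; for many workloads the change is
# mostly a rebuild of the application for arm64 rather than a rewrite.
ec2.run_instances(
    ImageId=latest["ImageId"],
    InstanceType="m6g.large",
    MinCount=1,
    MaxCount=1,
)
```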
I'm John Furrier for theCUBE, back with more live coverage after this short break.

Published Date : Dec 2 2020


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Brown | PERSON | 0.99+
Andy | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
SK Telecom | ORGANIZATION | 0.99+
40% | QUANTITY | 0.99+
Verizon | ORGANIZATION | 0.99+
Vodafone | ORGANIZATION | 0.99+
John | PERSON | 0.99+
80% | QUANTITY | 0.99+
Apple | ORGANIZATION | 0.99+
next year | DATE | 0.99+
Scott Sikora | PERSON | 0.99+
Miami | LOCATION | 0.99+
two months | QUANTITY | 0.99+
last year | DATE | 0.99+
Intel | ORGANIZATION | 0.99+
Houston | LOCATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Atlanta | LOCATION | 0.99+
yesterday | DATE | 0.99+
Matthews | PERSON | 0.99+
iPad | COMMERCIAL_ITEM | 0.99+
three hours | QUANTITY | 0.99+
U. S. | LOCATION | 0.99+
iPhone | COMMERCIAL_ITEM | 0.99+
first question | QUANTITY | 0.99+
Los Angeles | LOCATION | 0.99+
three week | QUANTITY | 0.99+
U. S. | LOCATION | 0.99+
35% | QUANTITY | 0.99+
Mac | COMMERCIAL_ITEM | 0.99+
two announcements | QUANTITY | 0.99+
two bills | QUANTITY | 0.99+
today | DATE | 0.99+
millions | QUANTITY | 0.99+
Katy D I | ORGANIZATION | 0.99+
Los Angeles | LOCATION | 0.99+
2020 | DATE | 0.99+
last night | DATE | 0.99+
five G | COMMERCIAL_ITEM | 0.98+
28 million developers | QUANTITY | 0.98+
13 years ago | DATE | 0.98+
three years ago | DATE | 0.98+
Layton | PERSON | 0.98+
both | QUANTITY | 0.98+
tomorrow Wednesday | DATE | 0.98+
12 available local zones | QUANTITY | 0.98+
one | QUANTITY | 0.98+
first time | QUANTITY | 0.98+
iPhones | COMMERCIAL_ITEM | 0.98+

Ken Holtz and Benito Lopez, Red Hat | Kubecon + CloudNativeCon NA 2020


 

from around the globe it's thecube with coverage of kubecon and cloudnativecon north america 2020 virtual brought to you by red hat the cloud native computing foundation and ecosystem partners welcome to thecube's coverage of kubecon and cloudnativecon 2020 the virtual edition i'm lisa martin i've got a couple of guests with me here today please welcome ken holtz the principal partner manager for red hat hey ken and welcome to the cube hi lisa thank you and benito lopez is also joining us senior manager of business development and the solutions provider services provider vertical excuse me f5 from f5 hi benito how are how are you i'm good you're in san francisco thank you all right yes we're all very socially distanced so guys kubecon cloudnativecon the virtual version here still the opportunity to engage with a lot of leaders in the community folks interested let's go ahead and start with you as we look at this very dynamic environment in which we are all living and working organizations are under even more pressure to deliver the information and the services and the experiences that customers demand internal customers external customers i know f5 is known for load balancing and load balancing is one of those tools that can certainly help with that but talk to us about what's kind of going on what's new in that respect from fbi's perspective we have evolved into an adaptive application services company what do i mean by adaptive application services it's the ability to scale secure and protect application applications wherever they may recite whether they're in the far edge whether in the cloud whether they're on premises and the ability to also observe the the analytics and telemetry emanating from those applications to be able to act upon what we see in that space so when we talk about service based architecture it's all about no longer being reliant on a in the on a single vendor on a monolithic application set of services or on what they call a vertical stack appliance service based architecture means you want it to be a scalable architecture whereby you can add the dock subtract um different types of network functions in 5g so the way this is going to be depend the the key enabler for a services-based architecture is going to be container based services whereby services will no longer just be applications are going to be disaggregated into micro services right in container clusters and f5's role here is to be able to scale and secure that traffic into a service provider environment more importantly our role is to turn a container-based architecture which is not service provider grade into a service provider-grade architecture which means we can actually see the services provide specific protocols into that container cluster and more importantly um scale and secure and apply the right policies within a containerized environment again containers is all about a service base is part of a service based architecture and containers today especially on kubernetes need a service provider grade platform of which we provide that market all right so kubernetes seeing a lot of activity with telco customers what are some of the challenges major we'll stick with you for another few seconds here what are some of the challenges that you're seeing that you're helping customers to work through well one is the first challenge is how do you make kubernetes telco great that's the first challenge so what f5 does is we actually um act as the ingress and egress point into kubernetes environment whereby we 
see telco as we were able to scale and secure telco specific protocols that kubernetes today um does not support and we work closely with red hat in that space um together with their open shift architecture to open shift platform cut we work with red hat today uh with um uh with respect to the openshift platform and that helps the service provider have a telco cloud-like platform that is um scalable that is secure and that is highly performant and low-latent all right so speaking of red hat let's bring ken into the conversation here kind of same question for you as we look at the activity uh in telco with respect to kubernetes let's talk to some of the ways that that red hat is helping customers address some of the challenges so that they can leverage that technology to to really move their businesses forward especially in such a dynamic environment right now thanks lisa so red hat has a goal of ensuring our openshift platform is ready and hardened enough to enable telco workloads for our 5g platform while we work with other partners f5 has been one of our key partners in this particular space for the first time openshift networking is natively integrating seamlessly with the commercial load balancer from f5 making it ready for telco 5g this is a co-engineered co-developed solution a new piece of software that we've implemented together oven kubernetes is enterprise and service provider ready we believe ovn will help significantly with latency overall and this is an evolution we have our first implementation of this now and we're working now on making this even more cloud-native which means making it more performant more resilient and even more capable and ready for telco grade requirements so can continuing on with you for a second in terms of how you're working together with customers to maybe customize or adapt the technologies can you talk to me a little bit about some of the customer feedback like some of those challenges that they're facing in today's environment which as we know is so dynamic and probably going to be for a while what's the customer like influence in terms of the partnership and the code development well so my focus at red hat is on partnership and the ecosystem partner management team allows red hat to meet the needs of a growing number of red hat partners the team serves as a partner's single point of contact for product questions roadmap updates engineering interlocks and general guidance for how to partner with red hat and with open source communities to achieve their business goals so uh we we're we're helping the end customers through our tight partnership imagine a lot of collaboration there so benito let's talk from your perspective from f5's perspective on the partnership and the collaboration that you have together and with your customers to help them be successful well ecosystems partnerships are going to be critical for our success as a company and more importantly as service providers today especially as i mentioned earlier around with respect to us they migrate and transform their networks from 4g to 5g um the architecture is going to horizontalize it's going to require a telcograde type of infrastructure manager a telcograde os and at the same time it's going to require a telco grade um and security platform and therefore red hat with its um them with them being what we call as a leader in open source and open and containers with their openshift platform we see them as a vital partner in working with service providers to transform their networks into a 
teleco great containerized environment right so as they migrate into um as they migrate from just software virtualization to containerization which is going to be critical for 5g um red hat is a key partner for us to work with to ensure that their network is their containerized network is telego-grade and highly performant and secure excellent thanks and ken back to you i know the audience would like to hear kind of some more specifics on the collaboration between you guys and also kind of beyond what they can see what's coming down the pipe in terms of open source projects or kind of beyond that yeah so some of some examples of our work together uh would include joint roadmap alignment uh we're very closely tied together on on the roadmap front early pre-pre-ga enablement early access to code and we have a goal of achieving certification here so we'd like to to achieve certification which provides assurance of compatibility and support avoids vendor lock-in and dispels any security concerns that customers may have excellent well guys anything else that you want to add here to the audience that is attending this virtual edition of kubecon cloud nativecon 2020 benito to you well i'd like to just say that as you migrate to as your network begins to transform and you are looking at the containerized architecture f5 and red hat are your best partners to have that telco grade architecture infrastructure in place i like that both statement very well put ken less thoughts from you i think benito said it best and i just wanted to say thanks a lot for having having us and this has been fun excellent guys thank you for sharing what's going on with the f5 red hat partnership how you're helping customers in telco with kubernetes the challenges there to alleviate ken bonito thanks for joining me on thecube today thank you thank you for my guests i'm lisa martin and you're watching thecube you
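For context on the ingress role described above, the sketch below uses the Kubernetes Python client to create a plain HTTP Ingress object, the kind of resource an external controller (such as F5's, or OpenShift's router) would reconcile into load-balancer configuration. The ingress class annotation, host name, and service name are placeholders, and telco-specific protocols (for example SCTP, Diameter, or GTP) need controller-specific resources beyond this standard object.

```python
from kubernetes import client, config

# Assumes a reachable cluster and a local kubeconfig; names below are placeholders.
config.load_kube_config()

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(
        name="demo-ingress",
        # Which controller picks this up depends on the deployment;
        # the class value "f5" here is purely illustrative.
        annotations={"kubernetes.io/ingress.class": "f5"},
    ),
    spec=client.V1IngressSpec(
        rules=[
            client.V1IngressRule(
                host="app.example.com",
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        client.V1HTTPIngressPath(
                            path="/",
                            path_type="Prefix",
                            backend=client.V1IngressBackend(
                                service=client.V1IngressServiceBackend(
                                    name="demo-service",
                                    port=client.V1ServiceBackendPort(number=8080),
                                )
                            ),
                        )
                    ]
                ),
            )
        ]
    ),
)

# Create the Ingress; an ingress controller then programs the data path for it.
client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```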

Published Date : Nov 20 2020

**Summary and Sentiment Analysis are not shown because of an improper transcript**

ENTITIES

Entity | Category | Confidence
san francisco | LOCATION | 0.99+
ken holtz | PERSON | 0.99+
Benito Lopez | PERSON | 0.99+
telco | ORGANIZATION | 0.99+
Ken Holtz | PERSON | 0.99+
lisa | PERSON | 0.99+
lisa martin | PERSON | 0.99+
first challenge | QUANTITY | 0.99+
today | DATE | 0.99+
ken | PERSON | 0.98+
benito lopez | PERSON | 0.98+
both | QUANTITY | 0.98+
benito | PERSON | 0.98+
Red Hat | ORGANIZATION | 0.97+
red hat | ORGANIZATION | 0.97+
red hat | ORGANIZATION | 0.97+
kubecon | ORGANIZATION | 0.97+
first time | QUANTITY | 0.97+
north america | LOCATION | 0.96+
one | QUANTITY | 0.95+
f5 | ORGANIZATION | 0.95+
2020 | DATE | 0.94+
fbi | ORGANIZATION | 0.94+
kubernetes | ORGANIZATION | 0.94+
first implementation | QUANTITY | 0.92+
teleco | ORGANIZATION | 0.85+
CloudNativeCon | EVENT | 0.83+
ken bonito | PERSON | 0.82+
cloudnativecon | ORGANIZATION | 0.82+
benito | ORGANIZATION | 0.79+
few | QUANTITY | 0.79+
Kubecon | ORGANIZATION | 0.79+
single vendor | QUANTITY | 0.75+
couple of guests | QUANTITY | 0.73+
second | QUANTITY | 0.73+
NA 2020 | EVENT | 0.72+
5g | QUANTITY | 0.7+
single point | QUANTITY | 0.69+
f5 red | COMMERCIAL_ITEM | 0.65+
telcograde os | COMMERCIAL_ITEM | 0.64+
5g | COMMERCIAL_ITEM | 0.57+
telco grade | ORGANIZATION | 0.54+
cloudnativecon | EVENT | 0.53+
computing | ORGANIZATION | 0.52+
f5 | TITLE | 0.5+
4g | QUANTITY | 0.44+
5g | OTHER | 0.43+