

Phil Kippen, Snowflake, Dave Whittington, AT&T & Roddy Tranum, AT&T | MWC Barcelona 2023


 

(gentle music) >> Narrator: "TheCUBE's" live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music) >> Hello everybody, welcome back to day four of "theCUBE's" coverage of MWC '23. We're here live at the Fira in Barcelona. Wall-to-wall coverage, John Furrier is in our Palo Alto studio, banging out all the news. Really, the whole week we've been talking about the disaggregation of the telco network, the new opportunities in telco. We're really excited to have AT&T and Snowflake here. Dave Whittington is the AVP, at the Chief Data Office at AT&T. Roddy Tranum is the Assistant Vice President, for Channel Performance Data and Tools at AT&T. And Phil Kippen, the Global Head Of Industry-Telecom at Snowflake, Snowflake's new telecom business. Snowflake just announced earnings last night. Typical Scarpelli, they beat earnings, very conservative guidance, stocks down today, but we like Snowflake long term, they're on that path to 10 billion. Guys, welcome to "theCUBE." Thanks so much >> Phil: Thank you. >> for coming on. >> Dave and Roddy: Thanks Dave. >> Dave, let's start with you. The data culture inside of telco, we've had this, we've been talking all week about this monolithic system. Super reliable. You guys did a great job during the pandemic. Everything shifting to landlines. We didn't even notice, you guys didn't miss a beat. Saved us. But the data culture's changing inside telco. Explain that. >> Well, absolutely. So, first of all IoT and edge processing is bringing forth new and exciting opportunities all the time. So, we're bridging the world between a lot of the OSS stuff that we can do with edge processing. But bringing that back, and now we're talking about working, and I would say traditionally, we talk data warehouse. Data warehouse and big data are now becoming a single mesh, all right? And the use cases and the way you can use those, especially I'm taking that edge data and bringing it back over, now I'm running AI and ML models on it, and I'm pushing back to the edge, and I'm combining that with my relational data. So that mesh there is making all the difference. We're getting new use cases that we can do with that. And it's just, and the volume of data is immense. >> Now, I love ChatGPT, but I'm hoping your data models are more accurate than ChatGPT. I never know. Sometimes it's really good, sometimes it's really bad. But enterprise, you got to be clean with your AI, don't you? >> Not only you have to be clean, you have to monitor it for bias and be ethical about it. We're really good about that. First of all with AT&T, our brand is Platinum. We take care of that. So, we may not be as cutting-edge risk takers as others, but when we go to market with an AI or an ML or a product, it's solid. >> Well hey, as telcos go, you guys are leaning into the Cloud. So I mean, that's a good starting point. Roddy, explain your role. You got an interesting title, Channel Performance Data and Tools, what's that all about? >> So literally anything with our consumer, retail, call centers' channels, all of our channels, from a data perspective and metrics perspective, what it takes to run reps, agents, all the way to leadership levels, scorecards, how you rank in the business, how you're driving the business, from sales, service, customer experience, all that data infrastructure with our great partners on the CDO side, as well as Snowflake, that comes from my team. 
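The pattern Dave Whittington sketches above, edge telemetry meeting relational warehouse data with AI/ML run over the combined set and results pushed back toward the edge, can be illustrated with a short example. The Python below is purely a sketch: the tables, columns, and anomaly model are hypothetical stand-ins rather than AT&T's actual pipeline or data.

```python
# A minimal sketch of the edge-plus-relational "mesh" described above: join
# edge telemetry with warehouse reference data, score the combined features
# with a simple ML model, and summarize results that could be pushed back
# toward the edge. Tables, columns, and the model are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Edge telemetry, e.g. streamed in from cell-site gateways (sample values).
edge = pd.DataFrame({
    "site_id": ["s1", "s1", "s2", "s2"],
    "latency_ms": [12.0, 140.0, 15.0, 14.0],
    "throughput_mbps": [900, 80, 850, 870],
})

# Relational reference data, e.g. from the central warehouse (sample values).
sites = pd.DataFrame({
    "site_id": ["s1", "s2"],
    "region": ["southeast", "west"],
    "capacity_mbps": [1000, 1000],
})

# The "mesh" step: edge and relational data become one feature set.
features = edge.merge(sites, on="site_id")
features["utilization"] = features["throughput_mbps"] / features["capacity_mbps"]

# Fit a simple anomaly model on the combined features and score each record.
cols = ["latency_ms", "utilization"]
model = IsolationForest(random_state=0).fit(features[cols])
features["anomaly_score"] = model.decision_function(features[cols])

# Per-site summary that could be shipped back to edge nodes for local action.
print(features.groupby("site_id")["anomaly_score"].min())
```

The join is the point of the sketch: the edge feed and the relational reference data only become useful for modeling once they can be combined behind a single query layer.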
>> And that's traditionally been done in a, I don't mean the pejorative, but we're talking about legacy, monolithic, sort of data warehouse technologies. >> Absolutely. >> We have a love-hate relationship with them. It's what we had. It's what we used, right? And now that's evolving. And you guys are leaning into the Cloud. >> Dramatic evolution. And what Snowflake's enabled for us is impeccable. We've talked about having, people have dreamed of one data warehouse for the longest time and everything in one system. Really, this is the only way that becomes a reality. The more you get in Snowflake, we can have golden source data, and instead of duplicating that 50 times across AT&T, it's in one place, we just share it, everybody leverages it, and now it's not duplicated, and the process efficiency is just incredible. >> But it really hinges on that separation of storage and compute. And we talk about the monolithic warehouse, and one of the nightmares I've lived with, is having a monolithic warehouse. And let's just go with some of my primary, traditional customers, sales, marketing and finance. They are leveraging BSS OSS data all the time. For me to coordinate a deployment, I have to make sure that each one of these units can take an outage, if it's going to be a long deployment. With the separation of storage, compute, they own their own compute cluster. So I can move faster for these people. 'Cause if finance, I can implement his code without impacting finance or marketing. This brings in CI/CD to more reality. It brings us faster to market with more features. So if he wants to implement a new comp plan for the field reps, or we're reacting to the marketplace, where one of our competitors has done something, we can do that in days, versus waiting weeks or months. >> And we've reported on this a lot. This is the brilliance of Snowflake's founders, that whole separation >> Yep. >> from compute and data. I like Dave, that you're starting with sort of the business flexibility, 'cause there's a cost element of this too. You can dial down, you can turn off compute, and then of course the whole world said, "Hey, that's a good idea." And a VC started throwing money at Amazon, but Redshift said, "Oh, we can do that too, sort of, can't turn off the compute." But I want to ask you Phil, so, >> Sure. >> it looks from my vantage point, like you're taking your Data Cloud message which was originally separate compute from storage simplification, now data sharing, automated governance, security, ultimately the marketplace. >> Phil: Right. >> Taking that same model, break down the silos into telecom, right? It's that same, >> Mm-hmm. >> sorry to use the term playbook, Frank Slootman tells me he doesn't use playbooks, but he's not a pattern matcher, but he's a situational CEO, he says. But the situation in telco calls for that type of strategy. So explain what you guys are doing in telco. >> I think there's, so, what we're launching, we launched last week, and it really was three components, right? So we had our platform as you mentioned, >> Dave: Mm-hmm. >> and that platform is being utilized by a number of different companies today. We also are adding, for telecom very specifically, we're adding capabilities in marketplace, so that service providers can not only use some of the data and apps that are in marketplace, but as well service providers can go and sell applications or sell data that they had built. And then as well, we're adding our ecosystem, it's telecom-specific. 
So, we're bringing partners in, technology partners, and consulting and services partners, that are very much focused on telecoms and what they do internally, but also helping them monetize new services. >> Okay, so it's not just sort of generic Snowflake into telco? You have specific value there. >> We're purposing the platform specifically for- >> Are you a telco guy? >> I am. You are, okay. >> Total telco guy absolutely. >> So there you go. You see that Snowflake is actually an interesting organizational structure, 'cause you're going after verticals, which is kind of rare for a company of your sort of inventory, I'll say, >> Absolutely. >> I don't mean that as a negative. (Dave laughs) So Dave, take us through the data journey at AT&T. It's a long history. You don't have to go back to the 1800s, but- (Dave laughs) >> Thank you for pointing out, we're a 149-year-old company. So, Jesse James was one of the original customers, (Dave laughs) and we have no longer got his data. So, I'll go back. I've been 17 years singular AT&T, and I've watched it through the whole journey of, where the monolithics were growing, when the consolidation of small, wireless carriers, and we went through that boom. And then we've gone through mergers and acquisitions. But, Hadoop came out, and it was going to solve all world hunger. And we had all the aspects of, we're going to monetize and do AI and ML, and some of the things we learned with Hadoop was, we had this monolithic warehouse, we had this file-based-structured Hadoop, but we really didn't know how to bring this all together. And we were bringing items over to the relational, and we were taking the relational and bringing it over to the warehouse, and trying to, and it was a struggle. Let's just go there. And I don't think we were the only company to struggle with that, but we learned a lot. And so now as tech is finally emerging, with the cloud, companies like Snowflake, and others that can handle that, where we can create, we were discussing earlier, but it becomes more of a conducive mesh that's interoperable. So now we're able to simplify that environment. And the cloud is a big thing on that. 'Cause you could not do this on-prem with on-prem technologies. It would be just too cost prohibitive, and too heavy of lifting, going back and forth, and managing the data. The simplicity the cloud brings with a smaller set of tools, and I'll say in the data space specifically, really allows us, maybe not a single instance of data for all use cases, but a greatly reduced ecosystem. And when you simplify your ecosystem, you simplify speed to market and data management. >> So I'm going to ask you, I know it's kind of internal organizational plumbing, but it'll inform my next question. So, Dave, you're with the Chief Data Office, and Roddy, you're kind of, you all serve in the business, but you're really serving the, you're closer to those guys, they're banging on your door for- >> Absolutely. I try to keep the 130,000 users who may or may not have issues sometimes with our data and metrics, away from Dave. And he just gets a call from me. >> And he only calls when he has a problem. He's never wished me happy birthday. (Dave and Phil laugh) >> So the reason I asked that is because, you describe Dave, some of the Hadoop days, and again love-hate with that, but we had hyper-specialized roles. We still do. You've got data engineers, data scientists, data analysts, and you've got this sort of this pipeline, and it had to be this sequential pipeline. 
I know Snowflake and others have come to simplify that. My question to you is, how is that those roles, how are those roles changing? How is data getting closer to the business? Everybody talks about democratizing business. Are you doing that? What's a real use example? >> From our perspective, those roles, a lot of those roles on my team for years, because we're all about efficiency, >> Dave: Mm-hmm. >> we cut across those areas, and always have cut across those areas. So now we're into a space where things have been simplified, data processes and copying, we've gone from 40 data processes down to five steps now. We've gone from five steps to one step. We've gone from days, now take hours, hours to minutes, minutes to seconds. Literally we're seeing that time in and time out with Snowflake. So these resources that have spent all their time on data engineering and moving data around, are now freed up more on what they have skills for and always have, the data analytics area of the business, and driving the business forward, and new metrics and new analysis. That's some of the great operational value that we've seen here. As this simplification happens, it frees up brain power. >> So, you're pumping data from the OSS, the BSS, the OKRs everywhere >> Everywhere. >> into Snowflake? >> Scheduling systems, you name it. If you can think of what drives our retail and centers and online, all that data, scheduling system, chat data, call center data, call detail data, all of that enters into this common infrastructure to manage the business on a day in and day out basis. >> How are the roles and the skill sets changing? 'Cause you're doing a lot less ETL, you're doing a lot less moving of data around. There were guys that were probably really good at that. I used to joke in the, when I was in the storage world, like if your job is bandaging lungs, you need to look for a new job, right? So, and they did and people move on. So, are you able to sort of redeploy those assets, and those people, those human resources? >> These folks are highly skilled. And we were talking about earlier, SQL hasn't gone away. Relational databases are not going away. And that's one thing that's made this migration excellent, they're just transitioning their skills. Experts in legacy systems are now rapidly becoming experts on the Snowflake side. And it has not been that hard a transition. There are certainly nuances, things that don't operate as well in the cloud environment that we have to learn and optimize. But we're making that transition. >> Dave: So just, >> Please. >> within the Chief Data Office we have a couple of missions, and Roddy is a great partner and an example of how it works. We try to bring the data for democratization, so that we have one interface, now hopefully know we just have a logical connection back to these Snowflake instances that we connect. But we're providing that governance and cleansing, and if there's a business rule at the enterprise level, we provide it. But the goal at CDO is to make sure that business units like Roddy or marketing or finance, that they can come to a platform that's reliable, robust, and self-service. I don't want to be in his way. So I feel like I'm providing a sub-level of platform, that he can come to and anybody can come to, and utilize, that they're not having to go back and undo what's in Salesforce, or ServiceNow, or in our billers. So, I'm sort of that layer. And then making sure that that ecosystem is robust enough for him to use. 
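Dave Whittington's points in this exchange, a single golden-source copy shared rather than duplicated 50 times, a separate compute cluster per business unit so one team's deployment never forces an outage on another, and a CDO layer granting governed self-service access, map naturally onto Snowflake's separation of storage and compute. The sketch below shows roughly what that could look like through the Snowflake Python connector; the account, warehouse, database, and role names are illustrative assumptions, not AT&T's real objects.

```python
# A hedged sketch of the setup described above: one shared, golden-source
# database plus a separate virtual warehouse (compute cluster) per business
# unit, submitted through the Snowflake Python connector. Account, warehouse,
# database, and role names are illustrative assumptions only.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",  # placeholder credentials
    user="cdo_admin",
    password="********",
)
cur = conn.cursor()

statements = [
    # One shared copy of the governed data; no per-team duplicates.
    "CREATE DATABASE IF NOT EXISTS golden_source",
    # Independent compute per business unit: each can be resized, suspended,
    # or targeted by a deployment without touching the others.
    "CREATE WAREHOUSE IF NOT EXISTS finance_wh WAREHOUSE_SIZE = 'SMALL' AUTO_SUSPEND = 60",
    "CREATE WAREHOUSE IF NOT EXISTS marketing_wh WAREHOUSE_SIZE = 'MEDIUM' AUTO_SUSPEND = 60",
    # The governance layer: roles get read access to the shared data and
    # usage of their own warehouse only.
    "CREATE ROLE IF NOT EXISTS finance_analyst",
    "CREATE ROLE IF NOT EXISTS marketing_analyst",
    "GRANT USAGE ON DATABASE golden_source TO ROLE finance_analyst",
    "GRANT USAGE ON DATABASE golden_source TO ROLE marketing_analyst",
    "GRANT USAGE ON WAREHOUSE finance_wh TO ROLE finance_analyst",
    "GRANT USAGE ON WAREHOUSE marketing_wh TO ROLE marketing_analyst",
]
for stmt in statements:
    cur.execute(stmt)

cur.close()
conn.close()
```

Because the warehouses are independent compute over the same stored data, resizing one or rolling new code against it leaves the others untouched, which is the faster CI/CD cycle described above.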
>> And that self-service infrastructure is predominantly through the Azure Cloud, correct? >> Dave: Absolutely. >> And you work on other clouds, but it's predominantly through Azure? >> We're predominantly in Azure, yeah. >> Dave: That's the first-party citizen? >> Yeah. >> Okay, I like to think in terms sometimes of data products, and I know you've mentioned upfront, you're Gold standard or Platinum standard, you're very careful about personal information. >> Dave: Yeah. >> So you're not trying to sell, I'm an AT&T customer, you're not trying to sell my data, and make money off of my data. So the value prop and the business case for Snowflake is it's simpler. You do things faster, you're in the cloud, lower cost, et cetera. But I presume you're also in the business, AT&T, of making offers and creating packages for customers. I look at those as data products, 'cause it's not a, I mean, yeah, there's a physical phone, but there's data products behind it. So- >> It ultimately is, but not everybody always sees it that way. Data reporting often can be an afterthought. And we're making it more on the forefront now. >> Yeah, so I like to think in terms of data products, I mean even if the financial services business, it's a data business. So, if we can think about that sort of metaphor, do you see yourselves as data product builders? Do you have that, do you think about building products in that regard? >> Within the Chief Data Office, we have a data product team, >> Mm-hmm. >> and by the way, I wouldn't be disingenuous if I said, oh, we're very mature in this, but no, it's where we're going, and it's somewhat of a journey, but I've got a peer, and their whole job is to go from, especially as we migrate from cloud, if Roddy or some other group was using tables three, four and five and joining them together, it's like, "Well look, this is an offer for data product, so let's combine these and put it up in the cloud, and here's the offer data set product, or here's the opportunity data product," and it's a journey. We're on the way, but we have dedicated staff and time to do this. >> I think one of the hardest parts about that is the organizational aspects of it. Like who owns the data now, right? It used to be owned by the techies, and increasingly the business lines want to have access, you're providing self-service. So there's a discussion about, "Okay, what is a data product? Who's responsible for that data product? Is it in my P&L or your P&L? Somebody's got to sign up for that number." So, it sounds like those discussions are taking place. >> They are. And, we feel like we're more the, and CDO at least, we feel more, we're like the guardians, and the shepherds, but not the owners. I mean, we have a role in it all, but he owns his metrics. >> Yeah, and even from our perspective, we see ourselves as an enabler of making whatever AT&T wants to make happen in terms of the key products and officers' trade-in offers, trade-in programs, all that requires this data infrastructure, and managing reps and agents, and what they do from a channel performance perspective. We still ourselves see ourselves as key enablers of that. And we've got to be flexible, and respond quickly to the business. >> I always had empathy for the data engineer, and he or she had to service all these different lines of business with no business context. >> Yeah. >> Like the business knows good data from bad data, and then they just pound that poor individual, and they're like, "Okay, I'm doing my best. 
It's just ones and zeros to me." So, it sounds like that's, you're on that path. >> Yeah absolutely, and I think, we do have refined, getting more and more refined owners of, since Snowflake enables these golden source data, everybody sees me and my organization, channel performance data, go to Roddy's team, we have a great team, and we go to Dave in terms of making it all happen from a data infrastructure perspective. So we, do have a lot more refined, "This is where you go for the golden source, this is where it is, this is who owns it. If you want to launch this product and services, and you want to manage reps with it, that's the place you-" >> It's a strong story. So Chief Data Office doesn't own the data per se, but it's your responsibility to provide the self-service infrastructure, and make sure it's governed properly, and in as automated way as possible. >> Well, yeah, absolutely. And let me tell you more, everybody talks about single version of the truth, one instance of the data, but there's context to that, that we are taking, trying to take advantage of that as we do data products is, what's the use case here? So we may have an entity of Roddy as a prospective customer, and we may have a entity of Roddy as a customer, high-value customer over here, which may have a different set of mix of data and all, but as a data product, we can then create those for those specific use cases. Still point to the same data, but build it in different constructs. One for marketing, one for sales, one for finance. By the way, that's where your data engineers are struggling. >> Yeah, yeah, of course. So how do I serve all these folks, and really have the context-common story in telco, >> Absolutely. >> or are these guys ahead of the curve a little bit? Or where would you put them? >> I think they're definitely moving a lot faster than the industry is generally. I think the enabling technologies, like for instance, having that single copy of data that everybody sees, a single pane of glass, right, that's definitely something that everybody wants to get to. Not many people are there. I think, what AT&T's doing, is most definitely a little bit further ahead than the industry generally. And I think the successes that are coming out of that, and the learning experiences are starting to generate momentum within AT&T. So I think, it's not just about the product, and having a product now that gives you a single copy of data. It's about the experiences, right? And now, how the teams are getting trained, domains like network engineering for instance. They typically haven't been a part of data discussions, because they've got a lot of data, but they're focused on the infrastructure. >> Mm. >> So, by going ahead and deploying this platform, for platform's purpose, right, and the business value, that's one thing, but also to start bringing, getting that experience, and bringing new experience in to help other groups that traditionally hadn't been data-centric, that's also a huge step ahead, right? So you need to enable those groups. >> A big complaint of course we hear at MWC from carriers is, "The over-the-top guys are killing us. They're riding on our networks, et cetera, et cetera. They have all the data, they have all the client relationships." Do you see your client relationships changing as a result of sort of your data culture evolving? >> Yes, I'm not sure I can- >> It's a loaded question, I know. 
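Before the conversation turns to the network side, it is worth sketching Dave Whittington's "single version of the truth, but different constructs" point from a moment ago: a prospect entity for marketing and a high-value customer entity for finance that point at the same underlying records. In effect these are use-case-specific projections over one golden source. The following is a minimal, self-contained Python illustration; the product names, fields, and sample rows are all hypothetical.

```python
# A small, self-contained sketch of "same data, different constructs":
# use-case-specific data products defined as projections over one shared,
# golden-source record set. Product names, fields, and rows are hypothetical.
from dataclasses import dataclass
from typing import Callable, List

GOLDEN_SOURCE: List[dict] = [
    {"customer_id": 1, "name": "Roddy", "status": "prospect", "lifetime_value": 0, "region": "southeast"},
    {"customer_id": 2, "name": "Ana", "status": "customer", "lifetime_value": 4200, "region": "west"},
]

@dataclass
class DataProduct:
    name: str
    owner: str  # the business unit accountable for this construct
    build: Callable[[List[dict]], List[dict]]

    def materialize(self) -> List[dict]:
        # Every construct reads the same shared records; only the projection differs.
        return self.build(GOLDEN_SOURCE)

marketing_prospects = DataProduct(
    name="prospective_customers",
    owner="marketing",
    build=lambda rows: [
        {"customer_id": r["customer_id"], "region": r["region"]}
        for r in rows if r["status"] == "prospect"
    ],
)

finance_high_value = DataProduct(
    name="high_value_customers",
    owner="finance",
    build=lambda rows: [
        {"customer_id": r["customer_id"], "lifetime_value": r["lifetime_value"]}
        for r in rows if r["lifetime_value"] > 1000
    ],
)

for product in (marketing_prospects, finance_high_value):
    print(product.owner, product.name, product.materialize())
```

Each construct carries its own owner, which mirrors the "guardians, not owners" split described earlier: the CDO curates the shared records while the business units own the constructs built on top of them.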
>> Yeah, and then I, so, we want to start embedding as much into our network on the proprietary value that we have, so we can start getting into that OTT play, us as any other carrier, we have distinct advantages of what we can do at the edge, and we just need to start exploiting those. But you know, 'cause whether it's location or whatnot, so we got to eat into that. Historically, the network is where we make our money in, and we stack the services on top of it. It used to be *69. >> Dave: Yeah. >> If anybody remembers that. >> Dave: Yeah, of course. (Dave laughs) >> But you know, it was stacked on top of our network. Then we stack another product on top of it. It'll be in the edge where we start providing distinct values to other partners as we- >> I mean, it's a great business that you're in. I mean, if they're really good at connectivity. >> Dave: Yeah. >> And so, it sounds like it's still to be determined >> Dave: Yeah. >> where you can go with this. You have to be super careful with private and for personal information. >> Dave: Yep. >> Yeah, but the opportunities are enormous. >> There's a lot. >> Yeah, particularly at the edge, looking at, private networks are just an amazing opportunity. Factories and name it, hospital, remote hospitals, remote locations. I mean- >> Dave: Connected cars. >> Connected cars are really interesting, right? I mean, if you start communicating car to car, and actually drive that, (Dave laughs) I mean that's, now we're getting to visit Xen Fault Tolerance people. This is it. >> Dave: That's not, let's hold the traffic. >> Doesn't scare me as much as we actually learn. (all laugh) >> So how's the show been for you guys? >> Dave: Awesome. >> What're your big takeaways from- >> Tremendous experience. I mean, someone who doesn't go outside the United States much, I'm a homebody. The whole experience, the whole trip, city, Mobile World Congress, the technologies that are out here, it's been a blast. >> Anything, top two things you learned, advice you'd give to others, your colleagues out in general? >> In general, we talked a lot about technologies today, and we talked a lot about data, but I'm going to tell you what, the accelerator that you cannot change, is the relationship that we have. So when the tech and the business can work together toward a common goal, and it's a partnership, you get things done. So, I don't know how many CDOs or CIOs or CEOs are out there, but this connection is what accelerates and makes it work. >> And that is our audience Dave. I mean, it's all about that alignment. So guys, I really appreciate you coming in and sharing your story in "theCUBE." Great stuff. >> Thank you. >> Thanks a lot. >> All right, thanks everybody. Thank you for watching. I'll be right back with Dave Nicholson. Day four SiliconANGLE's coverage of MWC '23. You're watching "theCUBE." (gentle music)

Published Date : Mar 2 2023


Welcome to Supercloud2


 

(bright upbeat melody) >> Hello everyone, welcome back to Supercloud2. I'm John Furrier, my co-host Dave Vellante, here at theCUBE in Palo Alto, California, for our live stage performance all day for Supercloud2. Unpacking this next generation movement in cloud computing. Dave, Supercloud1 was in August. We had great response and acceleration of that momentum. We had some haters too. We had some folks out there throwing shade on this. But at the same time, a lot of leaders came out of the woodwork, a lot of practitioners. And this Supercloud2 event I think will expose and illustrate some of the examples of what's happening in the industry and more importantly, kind of where it's going. >> Well it's great to be back in our studios in Palo Alto, John. Seems like just yesterday was August 9th, where the community was really refining the definition of Super Cloud. We were identifying the essential characteristics, with some of the leading technologists in Silicon Valley. We were digging into the deployment models. Whereas this Supercloud, Supercloud2 is really taking a practitioner view. We're going to hear from Walmart today. They've built a Supercloud. They called it the Walmart Cloud native platform. We're going to hear from other data practitioners, like Saks. We're going to hear from Western Union. They've got 200 locations around the world, how they're dealing with data sovereignty. And of course we've got some local technologists and practitioners coming in, analysts, consultants, theCUBE community. I'm really excited to be here. >> And we've got some great keynotes from executives at VMware. We're going to expose some of the things that they're working on around cross cloud services, which leads into multicloud. I think the practitioner angle highlights my favorite part of this program, 'cause you're starting to see the builders, a term coined by Andy Jassy, early days of AWS. That builder movement has been continuing to go. And you're seeing the enterprise, global enterprises adopt this builder mentality with Cloud Native. This is going to power the next generation global economy. And I think the role of the cloud computing vendors like AWS, Azure, Google, Alibaba are going to be the source engine of innovation. And what gets built on top of and with the clouds will be a big significant market value for all businesses and their business models. So I think the market wants the supercloud, the business models are pointing to Supercloud. The technology needs supercloud. And society, from an economic standpoint and from a use case standpoint, needs supercloud. You're seeing it today. Everyone's talking about chat GPT. This is an example of what will come out of this next generation and it's just getting started. So to me, you're either on the supercloud side of the camp or you're on the old school, hugging onto the old school mentality of wait a minute, that's cloud computing. So I think if you're not on the super cloud wave, you're going to be driftwood. And that's a term coined by Pat Gelsinger. And this is really the reality. Are you on the super cloud side? Or are you on the old huggin' the old model? And that's going to be a determinant. And you're going to see who's going to be the players on that, Dave. This is going to be a real big year. >> Everybody's heard the phrase follow the money. Well, my philosophy is follow the data. And that's a big part of what Supercloud2 is, because the data is where the money is across the clouds. 
And people want more simplicity, or greater simplicity across the clouds. So it's really, there's two forces here. You've got the ecosystem that's saying, hey the hyperscalers, they've done a great job but there's problems that they're not solving. So we're going to lean in and solve those problems. At the same time, you have the practitioners saying we have multicloud, we have to deal with this, help us. It's got to be simpler. Because we want to share data across clouds. We want to build data products, we want to monetize and drive revenue and cut costs. >> This is the key thing. The builder movement is hitting a wall, and that wall will be broken down because the business models of the companies themselves are demanding that the value from the data with security has to be embedded. So I think you're going to see a big year this next year or so where the builders will accelerate through this next generation, supercloud wave, will be a builder's wave for business. And I think that's going to be the nuance here. And all the people that are on the side of Supercloud are all pro-business, pro-technology. The ones that aren't are like, wait a minute I used to do things differently. They're stuck. And so I think this is going to be a question of are we stuck? Are builders accelerating? Will the business models develop around it? That's digital transformation. At the end of the day, the market's speaking, Dave. The market wants more. Chat GPT, you're seeing AI starting to flourish, powered by data. It's unstoppable, supercloud's unstoppable. >> One of our headliners today is Zhamak Dehghani, the creator of Data Mesh. We've got some news around her. She's going to be live in studio. Super excited about that. Kit Colbert in Supercloud, the first Supercloud in last August, laid out an initial architecture for Supercloud. He's going to advance that today, tell us what's changed, and really dig into and really talk about the meat on the bone, if you will. And we've got some other technologists that are coming in saying, Hey, is it a platform? Is it an architecture? What's the right model here? So we're going to debate that a little bit today. >> And before we close, I'll just say look at the guests, look at the talk tracks. You're seeing a diversity of startups doing cloud networking, you're seeing big practitioners building their own thing, being builders for business value and business model advantages. And you got companies like VMware, who have been on the wave of virtualization. So the, everyone who's involved in super cloud, they're seeing it, they're on the front lines. They're seeing the trend. They are riding that wave. And they have, they're bringing data to the table. So to me, you look at who's involved and you judge it that way. To me, that's the way I look at this. And because we're making it open, Supercloud is going to continue to be debated. But more importantly, the results are going to come in. The market supports it, the business needs it, tech's there, and will it happen? So I think the builders movement, Dave, is going to be big to watch. And then ultimately how that business transformation kicks in, and I think those are the two variables that I would watch on Supercloud. >> Our mission has always been around free content, giving back to the community. So I really want to thank our sponsors today. We've had a great partnership with VMware, who's not only contributed some financial support, but also great content. 
Alkira, ChaosSearch, prosimo, all phenomenal, allowing us to achieve our mission of serving our audiences and really trying to give more than we take from. >> Free content, that's our mission. Dave, great to kick it off. Kickin' off Supercloud2 all day, we've got some great programs here. We've got VMware coming up next. We have Victoria Viering, who's been on before. He's got a great vision for cross cloud service. We're getting also a keynote with Kit Colbert, who's going to lay out the fragmentation and the benefits that that solves, from solving fragmentation and silos, breaking down the silos and bringing multicloud future to the table via Super Cloud. So stay with us. We'll be right back after this short break. (bright upbeat music) (music fades)

Published Date : Feb 17 2023


Is Data Mesh the Killer App for Supercloud | Supercloud2


 

(gentle bright music) >> Okay, welcome back to our "Supercloud 2" event live coverage here at stage performance in Palo Alto syndicating around the world. I'm John Furrier with Dave Vellante. We've got exclusive news and a scoop here for SiliconANGLE and theCUBE. Zhamak Dehghani, creator of data mesh has formed a new company called NextData.com NextData, she's a cube alumni and contributor to our Supercloud initiative, as well as our coverage and breaking analysis with Dave Vellante on data, the killer app for Supercloud. Zhamak, great to see you. Thank you for coming into the studio and congratulations on your newly formed venture and continued success on the data mesh. >> Thank you so much. It's great to be here. Great to see you in person. >> Dave: Yeah, finally. >> John: Wonderful. Your contributions to the data conversation has been well-documented certainly by us and others in the industry. Data mesh taking the world by storm. Some people are debating it, throwing, you know, cold water on it. Some are, I think, it's the next big thing. Tell us about the data mesh super data apps that are emerging out of cloud. >> I mean, data mesh, as you said, it's, you know, the pain point that it surfaced were universal. Everybody said, "Oh, why didn't I think of that?" You know, it was just an obvious next step and people are approaching it, implementing it. I guess the last few years, I've been involved in many of those implementations, and I guess Supercloud is somewhat a prerequisite for it because it's data mesh and building applications using data mesh is about sharing data responsibly across boundaries. And those boundaries include boundaries, organizational boundaries cloud technology boundaries and trust boundaries. >> I want to bring that up because your venture, NextData which is new, just formed. Tell us about that. What wave is that riding? What specifically are you targeting? What's the pain point? >> Zhamak: Absolutely, yes. So next data is the result of, I suppose, the pains that I suffered from implementing a database for many of the organizations. Basically, a lot of organizations that I've worked with, they want decentralized data. So they really embrace this idea of decentralized ownership of the data, but yet they want interconnectivity through standard APIs, yet they want discoverability and governance. So they want to have policies implemented, they want to govern that data, they want to be able to discover that data and yet they want to decentralize it. And we do that with a developer experience that is easy and native to a generalist developer. So we try to find, I guess, the common denominator that solves those problems and enables that developer experience for data sharing. >> John: Since you just announced the news, what's been the reaction? >> Zhamak: I just announced the news right now, so what's the reaction? >> John: But people in the industry that know you, you did a lot of work in the area. What have been some of the feedback on the new venture in terms of the approach, the customers, problem? >> Yeah, so we've been in stealth modes, so we haven't publicly talked about it, but folks that have been close to us in fact have reached out. We already have implementations of our pilot platform with early customers, which is super exciting. And we're going to have multiple of those. Of course, we're a tiny, tiny company. We can have many of those where we are going to have multiple pilots, implementations of our platform in real world. 
These are real, global, large-scale organizations with real-world problems. So we're not going to build our platform in vacuum. And that's what's happening right now. >> When I think about your role at ThoughtWorks, you had a very wide observation space with a number of clients helping them implement data mesh and other things as well prior to your data mesh initiative. But when I look at data mesh, at least the ones that I've seen, they're very narrow. I think of JPMC, I think of HelloFresh. They're generally obviously not surprising. They don't include the big vision of inclusivity across clouds across different data stores. But it seems like people are having to go through some gymnastics to get to, you know, the organizational reality of decentralizing data, and at least pushing data ownership to the line of business. How are you approaching or are you approaching, solving that problem? Are you taking a narrow slice? What can you tell us about Next Data? >> Zhamak: Sure, yeah, absolutely. Gymnastics, the cute word to describe what the organizations have to go through. And one of those problems is that, you know, the data, as you know, resides on different platforms. It's owned by different people, it's processed by pipelines that who owns them. So there's this very disparate and disconnected set of technologies that were very useful for when we thought about data and processing as a centralized problem. But when you think about data as a decentralized problem, the cost of integration of these technologies in a cohesive developer experience is what's missing. And we want to focus on that cohesive end-to-end developer experience to share data responsibly in this autonomous units, we call them data products, I guess in data mesh, right? That constitutes computation, that governs that data policies, discoverability. So I guess, I heard this expression in the last talks that you can have your cake and eat it too. So we want people have their cakes, which is, you know, data in different places, decentralization and eat it too, which is interconnected access to it. So we start with standardizing and codifying this idea of a data product container that encapsulates data computation, APIs to get to it in a technology agnostic way, in an open way. And then, sit on top and use existing tech, you know, Snowflake, Databricks, whatever exists, you know, the millions of dollars of investments that companies have made, sit on top of those but create this cohesive, integrated experience where data product is a first class primitive. And that's really key here, that the language, and the modeling that we use is really native to data mesh is that I will make a data product, I'm sharing a data product, and that encapsulates on providing metadata about this. I'm providing computation that's constantly changing the data. I'm providing the API for that. So we're trying to kind of codify and create a new developer experience based on that. And developer, both from provider side and user side connected to peer-to-peer data sharing with data product as a primitive first class concept. >> Okay, so the idea would be developers would build applications leveraging those data products which are discoverable and governed. Now, today you see some companies, you know, take a snowflake for example. >> Zhamak: Yeah. >> Attempting to do that within their own little walled garden. They even, at one point, used the term, "Mesh." I dunno if they pull back on that. 
And then they sort of became aware of some of your work. But a lot of the things that they're doing within their little insulated environment, you know, support that, that, you know, governance, they're building out an ecosystem. What's different in your vision? >> Exactly. So we realize that, you know, and this is a reality, like you go to organizations, they have a snowflake and half of the organization happily operates on Snowflake. And on the other half, oh, we are on, you know, bare infrastructure on AWS, or we are on Databricks. This is the realities, you know, this Supercloud that's written up here. It's about working across boundaries of technology. So we try to embrace that. And even for our own technology with the way we're building it, we say, "Okay, nobody's going to use next data mesh operating system. People will have different platforms." So you have to build with openness in mind, and in case of Snowflake, I think, you know, they have I'm sure very happy customers as long as customers can be on Snowflake. But once you cross that boundary of platforms then that becomes a problem. And we try to keep that in mind in our solution. >> So, it's worth reviewing that basically, the concept of data mesh is that, whether you're a data lake or a data warehouse, an S3 bucket, an Oracle database as well, they should be inclusive inside of the data. >> We did a session with AWS on the startup showcase, data as code. And remember, I wrote a blog post in 2007 called, "Data's the new developer kit." Back then, they used to call 'em developer kits, if you remember. And that we said at that time, whoever can code data >> Zhamak: Yes. >> Will have a competitive advantage. >> Aren't there machines going to be doing that? Didn't we just hear that? >> Well we have, and you know, Hey Siri, hey Cube. Find me that best video for data mesh. There it is. I mean, this is the point, like what's happening is that, now, data has to be addressable >> Zhamak: Yes. >> For machines and for coding. >> Zhamak: Yes. >> Because as you need to call the data. So the question is, how do you manage the complexity of big things as promiscuous as possible, making it available as well as then governing it because it's a trade off. The more you make open >> Zhamak: Definitely. >> The better the machine learning. >> Zhamak: Yes. >> But yet, the governance issue, so this is the, you need an OS to handle this maybe. >> Yes, well, we call our mental model for our platform is an OS operating system. Operating systems, you know, have shown us how you can kind of abstract what's complex and take care of, you know, a lot of complexities, but yet provide an open and, you know, dynamic enough interface. So we think about it that way. We try to solve the problem of policies live with the data. An enforcement of the policies happens at the most granular level which is, in this concept, the data product. And that would happen whether you read, write, or access a data product. But we can never imagine what are these policies could be. So our thinking is, okay, we should have a open policy framework that can allow organizations write their own policy drivers, and policy definitions, and encode it and encapsulated in this data product container. But I'm not going to fool myself to say that, you know, that's going to solve the problem that you just described. 
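Zhamak's description of the data product container, data plus the computation that maintains it, discovery metadata, an access API, and an open policy framework whose drivers are enforced on every read or write, can be sketched in a few lines of Python. The example below only illustrates that shape; it is not Nextdata's actual design or API, and the policy driver, product, and field names are hypothetical.

```python
# An illustrative sketch (not Nextdata's actual design or API) of the data
# product container shape described above: data, the computation that
# maintains it, discovery metadata, an access API, and pluggable policy
# drivers enforced at the product's boundary. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

PolicyDriver = Callable[[str, List[dict]], List[dict]]  # (consumer, rows) -> allowed rows

def mask_pii(consumer: str, rows: List[dict]) -> List[dict]:
    # Example policy driver: external consumers never see raw subscriber ids.
    if consumer.startswith("partner:"):
        return [{k: ("***" if k == "subscriber_id" else v) for k, v in row.items()} for row in rows]
    return rows

@dataclass
class DataProduct:
    name: str
    metadata: Dict[str, str]             # discovery info: domain, owner, schema, ...
    transform: Callable[[], List[dict]]  # computation that produces/refreshes the data
    policies: List[PolicyDriver] = field(default_factory=list)

    def read(self, consumer: str) -> List[dict]:
        # The output port: policies travel with the product and are applied
        # here on every access, regardless of where the consumer runs.
        rows = self.transform()
        for policy in self.policies:
            rows = policy(consumer, rows)
        return rows

usage_by_cell = DataProduct(
    name="usage_by_cell",
    metadata={"domain": "network", "owner": "network-engineering"},
    transform=lambda: [{"subscriber_id": "s-123", "cell": "c-9", "gb_used": 1.7}],
    policies=[mask_pii],
)

print(usage_by_cell.read("internal:marketing"))
print(usage_by_cell.read("partner:ott-app"))
```

Because the policy drivers live inside the product and run at its output port, the same rule applies whether the consumer is an internal team or an external partner, which is the "policies live with the data" idea she describes.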
I think we are in this, I don't know, if I look into my crystal ball, what I think might happen is that right now, the primitives that we work with to train machine-learning model are still bits and bites in data. They're fields, rows, columns, right? And that creates quite a large surface area, an attack area for, you know, for privacy of the data. So perhaps, one of the trends that we might see is this evolution of data APIs to become more and more computational aware to bring the compute to the data to reduce that surface area so you can really leave the control of the data to the sovereign owners of that data, right? So that data product. So I think the evolution of our data APIs perhaps will become more and more computational. So you describe what you want, and the data owner decides, you know, how to manage the- >> John: That's interesting, Dave, 'cause it's almost like we just talked about ChatGPT in the last segment with you, who's a machine learning, could really been around the industry. It's almost as if you're starting to see reason come into the data, reasoning. It's like you starting to see not just metadata, using the data to reason so that you don't have to expose the raw data. It's almost like a, I won't say curation layer, but an intelligence layer. >> Zhamak: Exactly. >> Can you share your vision on that 'cause that seems to be where the dots are connecting. >> Zhamak: Yes, this is perhaps further into the future because just from where we stand, we have to create still that bridge of familiarity between that future and present. So we are still in that bridge-making mode, however, by just the basic notion of saying, "I'm going to put an API in front of my data, and that API today might be as primitive as a level of indirection as in you tell me what you want, tell me who you are, let me go process that, all the policies and lineage, and insert all of this intelligence that need to happen. And then I will, today, I will still give you a file. But by just defining that API and standardizing it, now we have this amazing extension point that we can say, "Well, the next revision of this API, you not just tell me who you are, but you actually tell me what intelligence you're after. What's a logic that I need to go and now compute on your API?" And you can kind of evolve that, right? Now you have a point of evolution to this very futuristic, I guess, future where you just describe the question that you're asking from the chat. >> Well, this is the Supercloud, Dave. >> I have a question from a fan, I got to get it in. It's George Gilbert. And so, his question is, you're blowing away the way we synchronize data from operational systems to the data stack to applications. So the concern that he has, and he wants your feedback on this, "Is the data product app devs get exposed to more complexity with respect to moving data between data products or maybe it's attributes between data products, how do you respond to that? How do you see, is that a problem or is that something that is overstated, or do you have an answer for that?" >> Zhamak: Absolutely. So I think there's a sweet spot in getting data developers, data product developers closer to the app, but yet not burdening them with the complexity of the application and application logic, and yet reducing their cognitive load by localizing what they need to know about which is that domain where they're operating within. Because what's happening right now? 
what's happening right now is that data engineers, a ton of empathy for them for their high threshold of pain that they can, you know, deal with, they have been centralized, they've put into the data team, and they have been given this unbelievable task of make meaning out of data, put semantic over it, curates it, cleans it, and so on. So what we are saying is that get those folks embedded into the domain closer to the application developers, these are still separately moving units. Your app and your data products are independent but yet tightly closed with each other, tightly coupled with each other based on the context of the domain, so reduce cognitive load by localizing what they need to know about to the domain, get them closer to the application but yet have them them separate from app because app provides a very different service. Transactional data for my e-commerce transaction, data product provides a very different service, longitudinal data for the, you know, variety of this intelligent analysis that I can do on the data. But yet, it's all within the domain of e-commerce or sales or whatnot. >> So a lot of decoupling and coupling create that cohesiveness. >> Zhamak: Absolutely. >> Architecture. So I have to ask you, this is an interesting question 'cause it came up on theCUBE all last year. Back on the old server, data center days and cloud, SRE, Google coined the term, "Site Reliability Engineer" for someone to look over the hundreds of thousands of servers. We asked a question to data engineering community who have been suffering, by the way, agree. Is there an SRE-like role for data? Because in a way, data engineering, that platform engineer, they are like the SRE for data. In other words, managing the large scale to enable automation and cell service. What's your thoughts and reaction to that? >> Zhamak: Yes, exactly. So, maybe we go through that history of how SRE came to be. So we had the first DevOps movement which was, remove the wall between dev and ops and bring them together. So you have one cross-functional units of the organization that's responsible for, you build it you run it, right? So then there is no, I'm going to just shoot my application over the wall for somebody else to manage it. So we did that, and then we said, "Okay, as we decentralized and had this many microservices running around, we had to create a layer that abstracted a lot of the complexity around running now a lot or monitoring, observing and running a lot while giving autonomy to this cross-functional team." And that's where the SRE, a new generation of engineers came to exist. So I think if I just look- >> Hence Borg, hence Kubernetes. >> Hence, hence, exactly. Hence chaos engineering, hence embracing the complexity and messiness, right? And putting engineering discipline to embrace that and yet give a cohesive and high integrity experience of those systems. So I think, if we look at that evolution, perhaps something like that is happening by bringing data and apps closer and make them these domain-oriented data product teams or domain oriented cross-functional teams, full stop, and still have a very advanced maybe at the platform infrastructure level kind of operational team that they're not busy doing two jobs which is taking care of domains and the infrastructure, but they're building infrastructure that is embracing that complexity, interconnectivity of this data process. >> John: So you see similarities. 
>> Absolutely, but I feel like we're probably in a more early days of that movement. >> So it's a data DevOps kind of thing happening where scales happening. It's good things are happening yet. Eh, a little bit fast and loose with some complexities to clean up. >> Yes, yes. This is a different restructure. As you said we, you know, the job of this industry as a whole on architects is decompose, recompose, decompose, recomposing a new way, and now we're like decomposing centralized team, recomposing them as domains and- >> John: So is data mesh the killer app for Supercloud? >> You had to do this for me. >> Dave: Sorry, I couldn't- (John and Dave laughing) >> Zhamak: What do you want me to say, Dave? >> John: Yes. >> Zhamak: Yes of course. >> I mean Supercloud, I think it's, really the terminology's Supercloud, Opencloud. But I think, in spirits of it, this embracing of diversity and giving autonomy for people to make decisions for what's right for them and not yet lock them in. I think just embracing that is baked into how data mesh assume the world would work. >> John: Well thank you so much for coming on Supercloud too, really appreciate it. Data has driven this conversation. Your success of data mesh has really opened up the conversation and exposed the slow moving data industry. >> Dave: Been a great catalyst. (John laughs) >> John: That's now going well. We can move faster, so thanks for coming on. >> Thank you for hosting me. It was wonderful. >> Okay, Supercloud 2 live here in Palo Alto. Our stage performance, I'm John Furrier with Dave Vellante. We're back with more after this short break, Stay with us all day for Supercloud 2. (gentle bright music)

Published Date : Feb 17 2023


Is Data Mesh the Next Killer App for Supercloud?


 

(upbeat music) >> Welcome back to our Supercloud 2 event live coverage here of stage performance in Palo Alto, syndicating around the world. I'm John Furrier with Dave Vellante. We got exclusive news and a scoop here for SiliconANGLE and theCUBE. Zhamak Dehghani, creator of data mesh, has formed a new company called Nextdata.com, Nextdata. She's a cube alumna and contributor to our supercloud initiative, as well as our coverage and Breaking Analysis with Dave Vellante on data, the killer app for supercloud. Zhamak, great to see you. Thank you for coming into the studio, and congratulations on your newly formed venture and continued success on the data mesh. >> Thank you so much. It's great to be here. Great to see you in person. >> Dave: Yeah, finally. >> Wonderful. Your contributions to the data conversation have been well documented, certainly by us and others in the industry. Data mesh is taking the world by storm. Some people are debating it, throwing cold water on it. Some are thinking it's the next big thing. Tell us about the data mesh, super data apps that are emerging out of cloud. >> I mean, data mesh, as you said, the pain points that it surfaced were universal. Everybody said, "Oh, why didn't I think of that?" It was just an obvious next step and people are approaching it, implementing it. I guess the last few years I've been involved in many of those implementations, and I guess supercloud is somewhat a prerequisite for it, because data mesh, and building applications using data mesh, is about sharing data responsibly across boundaries. And those boundaries include organizational boundaries, cloud technology boundaries, and trust boundaries. >> I want to bring that up because your venture, Nextdata, is new, just formed. Tell us about that. What wave is that riding? What specifically are you targeting? What's the pain point? >> Absolutely. Yes, so Nextdata is the result of, I suppose, the pains that I suffered from implementing data mesh for many organizations. Basically, a lot of organizations that I've worked with want decentralized data. So they really embrace this idea of decentralized ownership of the data, but yet they want interconnectivity through standard APIs, yet they want discoverability and governance. So they want to have policies implemented, they want to govern that data, they want to be able to discover that data, and yet they want to decentralize it. And we do that with a developer experience that is easy and native to a generalist developer. So we try to find, I guess, the common denominator that solves those problems and enables that developer experience for data sharing. >> Since you just announced the news, what's been the reaction? >> I just announced the news right now, so what's the reaction? >> But people in the industry know you did a lot of work in the area. What has some of the feedback been on the new venture in terms of the approach, the customers, the problem? >> Yeah, so we've been in stealth mode, so we haven't publicly talked about it, but folks that have been close to us have, in fact, reached out, and we already have implementations of our pilot platform with early customers, which is super exciting. And we're going to have multiple of those. Of course, we're a tiny, tiny company, we can't have many of those, but we are going to have multiple pilot implementations of our platform in the real world, with real global large-scale organizations that have real-world problems. So we're not going to build our platform in a vacuum.
And that's what's happening right now. >> Zhamak, when I think about your role at ThoughtWorks, you had a very wide observation space with a number of clients, helping them implement data mesh and other things as well prior to your data mesh initiative. But when I look at data mesh, at least the implementations that I've seen, they're very narrow. I think of JPMC, I think of HelloFresh. They generally, obviously not surprising, don't include the big vision of inclusivity across clouds, across different data stores. But it seems like people are having to go through some gymnastics to get to the organizational reality of decentralizing data and at least pushing data ownership to the line of business. How are you approaching, or are you approaching, solving that problem? Are you taking a narrow slice? What can you tell us about Nextdata? >> Yeah, absolutely. Gymnastics, the cute word to describe what the organizations have to go through. And one of those problems is that the data, as you know, resides on different platforms, it's owned by different people, it's processed by pipelines that who knows who owns. So there's this very disparate and disconnected set of technologies that were very useful when we thought about data and processing as a centralized problem. But when you think about data as a decentralized problem, the cost of integration of these technologies into a cohesive developer experience is what's missing. And we want to focus on that cohesive end-to-end developer experience to share data responsibly in these autonomous units. We call them data products, I guess, in data mesh. That constitutes the computation, the policies that govern that data, discoverability. So I guess, I heard this expression in the last talks, that you can have your cake and eat it too. So we want people to have their cake, which is data in different places, decentralization, and eat it too, which is interconnected access to it. So we start with standardizing and codifying this idea of a data product container that encapsulates data, computation, and the APIs to get to it, in a technology-agnostic way, in an open way. And then we sit on top and use existing tech, Snowflake, Databricks, whatever exists, the millions of dollars of investments that companies have made. We sit on top of those but create this cohesive, integrated experience where the data product is a first-class primitive. And that's really key here. The language and the modeling that we use is really native to data mesh, which is that I'm building a data product, I'm sharing a data product, and that encapsulates: I'm providing metadata about this, I'm providing computation that's constantly changing the data, I'm providing the API for that. So we're trying to kind of codify and create a new developer experience based on that. And developers, both on the provider side and the user side, are connected through peer-to-peer data sharing with the data product as a first-class primitive concept. >> So the idea would be developers would build applications leveraging those data products, which are discoverable and governed. Now today you see some companies, take a Snowflake for example, attempting to do that within their own little walled garden. They even at one point used the term mesh. I don't know if they pulled back on that. And then they became aware of some of your work. But a lot of the things that they're doing within their little insulated environment support that governance; they're building out an ecosystem. What's different in your vision? >> Exactly.
So we realized that, and this is a reality: like, you go to organizations, they have a Snowflake and half of the organization happily operates on Snowflake. And the other half, "oh, we are on bare infrastructure on AWS, or we are on Databricks." This is the reality. This supercloud that's written up here, it's about working across boundaries of technology. So we try to embrace that. And even for our own technology, with the way we're building it, we say, "Okay, nobody's going to use just Nextdata's data mesh operating system. People will have different platforms." So you have to build with openness in mind, and in the case of Snowflake, I think they have very, I'm sure, very happy customers, as long as customers can be on Snowflake. But once you cross that boundary of platforms, then that becomes a problem. And we try to keep that in mind in our solution. >> So it's worth reviewing that basically the concept of data mesh is that whether it's a data lake or a data warehouse, an S3 bucket, or an Oracle database as well, they should all be inclusive inside of the data mesh. >> We did a session with AWS on the startup showcase, data as code. And remember I wrote a blog post in 2007 called "Data as the New Developer Kit," back then we used to call them developer kits, if you remember. And we said at that time, whoever can code data will have a competitive advantage. >> Aren't the machines going to be doing that? Didn't we just hear that? >> Well, we have. Hey, Siri. Hey, Cube, find me that best video for data mesh. There it is. But this is the point, like what's happening is that now data has to be addressable for machines and for coding, because you need to call the data. So the question is, how do you manage the complexity of making data as promiscuous as possible, making it available, as well as then governing it? Because it's a trade-off. The more you make it open, the better the machine learning. But then there's the governance issue, so this is the, you need an OS to handle this maybe. >> Yes. So yes, well, our mental model for our platform is an OS, an operating system. Operating systems have shown us how you can abstract what's complex and take care of a lot of complexities, but yet provide an open and dynamic enough interface. So we think about it that way. We try to solve the problem so that policies live with the data, and enforcement of the policies happens at the most granular level, which is in this concept of the data product. And that would happen whether you read, write, or access a data product. But we can never imagine what all these policies could be. So our thinking is we should have an open policy framework that can allow organizations to write their own policy drivers and policy definitions, and encode them and encapsulate them in this data product container. But I'm not going to fool myself to say that that's going to solve the problem that you just described. I think we are in this, I don't know, if I look into my crystal ball, what I think might happen is that right now the primitives that we work with to train machine learning models are still bits and bytes and data. They're fields, rows, columns, and that creates quite a large surface area and attack area for privacy of the data. So perhaps one of the trends that we might see is this evolution of data APIs to become more and more computationally aware, to bring the compute to the data, to reduce that surface area. So you can really leave the control of the data to the sovereign owners of that data, so that data product.
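To make the data product container and policy-as-code ideas above concrete, here is a minimal sketch, assuming a hypothetical container format. It is an illustration of the concept as described in the conversation, not Nextdata's platform or API; every class, field, and policy name here is invented.

```python
# Purely illustrative sketch of a "data product container": an autonomous unit
# that bundles data access, metadata, and the policies that travel with it.
# All names are hypothetical; this is not Nextdata's or any vendor's API.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

Policy = Callable[[Dict[str, Any]], bool]  # returns True if the request is allowed

@dataclass
class DataProduct:
    name: str                                   # e.g. "ecommerce.orders"
    owner: str                                  # the domain team that owns it
    metadata: Dict[str, Any] = field(default_factory=dict)
    policies: List[Policy] = field(default_factory=list)
    _reader: Callable[[], Any] = lambda: []     # underlying storage is pluggable

    def read(self, request: Dict[str, Any]) -> Any:
        """Enforce every attached policy at access time, then serve the data."""
        for policy in self.policies:
            if not policy(request):
                raise PermissionError(f"request denied by policy on {self.name}")
        return self._reader()

# Example policy: only consumers in the 'analytics' group may request PII fields.
def no_pii_outside_analytics(request: Dict[str, Any]) -> bool:
    return "pii" not in request.get("fields", []) or request.get("group") == "analytics"

orders = DataProduct(
    name="ecommerce.orders",
    owner="ecommerce-domain-team",
    metadata={"schema_version": 3, "refresh": "hourly"},
    policies=[no_pii_outside_analytics],
    _reader=lambda: [{"order_id": 1, "total": 42.0}],   # stand-in for real storage
)

print(orders.read({"group": "analytics", "fields": ["order_id", "total"]}))
```

The design point the sketch tries to capture is that the policies travel with the product and are evaluated at read, write, or access time, at the most granular level, rather than in a central gateway.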
So I think that evolution of our data APIs perhaps will become more and more computational. So you describe what you want, and the data owner decides how to manage it. >> That's interesting, Dave, 'cause it's almost like what we just talked about with ChatGPT in the last segment we had with you. Machine learning has been around the industry, but it's almost as if you're starting to see reasoning come into the data, starting to see not just metadata, but using the data to reason so that you don't have to expose the raw data. So almost like a, I won't say curation layer, but an intelligence layer. >> Zhamak: Exactly. >> Can you share your vision on that? 'Cause that seems to be where the dots are connecting. >> Yes, perhaps further into the future, because just from where we stand, we still have to create that bridge of familiarity between that future and the present. So we are still in that bridge-making mode. However, by just the basic notion of saying, "I'm going to put an API in front of my data," and that API today might be as primitive as a level of indirection, as in you tell me what you want, tell me who you are, let me go process that, all the policies and lineage and all of this intelligence that needs to happen, and then today, I will still give you a file. But by just defining that API and standardizing it, now we have this amazing extension point, where we can say, "Well, in the next revision of this API, you not just tell me who you are, but you actually tell me what intelligence you're after. What's the logic that I need to go and now compute behind this API?" And you can evolve that. Now you have a point of evolution toward this very futuristic, I guess, future that you just described, where you ask the question the way you're asking it of ChatGPT. >> Well, this is the supercloud, go ahead, Dave. >> I have a question from a fan, I got to get it in. It's George Gilbert. And so his question is, you're blowing away the way we synchronize data from operational systems to the data stack to applications. So the concern that he has, and he wants your feedback on this, is that data product app devs get exposed to more complexity with respect to moving data between data products, or maybe it's attributes between data products. How do you respond to that? How do you see it? Is that a problem? Is that something that is overstated, or do you have an answer for that? >> Absolutely. So I think there's a sweet spot in getting data developers, data product developers, closer to the app, but yet not overburdening them with the complexity of the application and application logic, and yet reducing their cognitive load by localizing what they need to know about, which is that domain they're operating within. Because what's happening right now? What's happening right now is that data engineers, and I have a ton of empathy for them for the high threshold of pain that they can deal with, have been centralized, they've been put into the data team, and they have been given this unbelievable task of making meaning out of data, putting semantics over it, curating it, cleaning it, and so on. So what we are saying is, get those folks embedded into the domain, closer to the application developers. These are still separately moving units. Your app and your data products are independent, but yet tightly coupled with each other based on the context of the domain.
So reduce cognitive load by localizing what they need to know about to the domain, get them closer to the application, but yet have them separate from the app, because the app provides a very different service: transactional data for my e-commerce transaction. The data product provides a very different service: longitudinal data for the variety of intelligent analysis that I can do on the data. But yet it's all within the domain of e-commerce or sales or whatnot. >> It's a lot of decoupling and coupling to create that cohesive architecture. So I have to ask you, this is an interesting question 'cause it came up on theCUBE all last year. Back in the old server data center days and then cloud, Google coined the term SRE, site reliability engineer, for someone to look over the hundreds of thousands of servers. We asked the question to the data engineering community, who have been suffering, by the way, and I agree. Is there an SRE-like role for data? Because in a way data engineering, that platform engineer, they are like the SRE for data. In other words, managing the large scale to enable automation and self-service. What's your thoughts and reaction to that? >> Yes, exactly. So maybe we go through that history of how SRE came to be. So we had the first DevOps movement, which was remove the wall between dev and ops and bring them together. So you have one cross-functional unit of the organization that's responsible for you build it, you run it. So then there is no, I'm going to just shoot my application over the wall for somebody else to manage it. So we did that, and then we said, okay, as we decentralized and had these many microservices running around, we had to create a layer that abstracted a lot of the complexity around running, monitoring, and observing a lot of services while giving autonomy to this cross-functional team. And that's where the SRE, a new generation of engineers, came to exist. So I think if I just look at- >> Hence, Kubernetes. >> Hence, hence, exactly. Hence, chaos engineering. Hence, embracing the complexity and messiness, and putting engineering discipline to embrace that and yet give a cohesive and high-integrity experience of those systems. So I think if we look at that evolution, perhaps something like that is happening by bringing data and apps closer and making them these domain-oriented data product teams, or domain-oriented cross-functional teams, full stop, and still having a very advanced, maybe at the platform level, infrastructure level, operational team that is not busy doing two jobs, which is taking care of domains and the infrastructure, but is building infrastructure that is embracing that complexity and interconnectivity of this data process. >> So you see similarities? >> I see, absolutely. But I feel like we're probably in the earlier days of that movement. >> So it's a data DevOps kind of thing happening, where scale is happening. Good things are happening, yet it's a little bit fast and loose, with some complexities to clean up. >> Yes. This is a different restructure. As you said, the job of this industry as a whole, and of architects, is to decompose and recompose, decompose and recompose in a new way, and now we're decomposing the centralized team, recomposing them as domains. >> So is data mesh the killer app for supercloud? >> You had to do this to me. >> Sorry, I couldn't resist. >> I know. Of course you want me to say this. >> Yes. >> Yes, of course.
I mean, supercloud, I think it's really the terminology, supercloud, open cloud, but I think in the spirit of it, this embracing of diversity and giving autonomy for people to make decisions for what's right for them, and not locking them in, I think just embracing that is baked into how data mesh assumes the world would work. >> Well, thank you so much for coming on Supercloud 2. We really appreciate it. Data has driven this conversation. Your success with data mesh has really opened up the conversation and exposed the slow-moving data industry. >> Dave: Been a great catalyst. >> That's now going well. We can move faster. So thanks for coming on. >> Thank you for hosting me. It was wonderful. >> Supercloud 2 live here in Palo Alto, our stage performance. I'm John Furrier with Dave Vellante. We'll be back with more after this short break. Stay with us all day for Supercloud 2. (upbeat music)
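Earlier in the conversation, the data product API is described as evolving from a primitive level of indirection, hand back a file, toward an extension point where the consumer states who they are and what computation they want run close to the data. The sketch below is a hedged illustration of that indirection; the request shape, the registry, and the function names are invented for illustration, not a published specification.

```python
# Illustrative sketch of a data product API evolving from "give me a file"
# to "run this declared computation near the data". All names are hypothetical.
from typing import Any, Callable, Dict

# Registry of computations the data product is willing to run on behalf of consumers.
ALLOWED_COMPUTATIONS: Dict[str, Callable[[list], Any]] = {
    "daily_revenue": lambda rows: sum(r["total"] for r in rows),
    "order_count": lambda rows: len(rows),
}

def serve(request: Dict[str, Any], rows: list) -> Any:
    """v1: return the raw rows (a 'file'). v2: execute a declared computation
    next to the data so the raw records never leave the product's boundary.
    The 'who' field stands in for identity, which would drive policy checks."""
    if request.get("api_version", 1) == 1:
        return rows                               # primitive level of indirection
    computation = ALLOWED_COMPUTATIONS.get(request["computation"])
    if computation is None:
        raise ValueError("computation not offered by this data product")
    return computation(rows)                      # only the result crosses the API

orders = [{"order_id": 1, "total": 42.0}, {"order_id": 2, "total": 18.5}]
print(serve({"api_version": 2, "who": "finance-app", "computation": "daily_revenue"}, orders))
```

The point of standardizing even a primitive API is that it creates the extension point: later revisions can accept richer declarations of intent without consumers changing how they connect, and raw records need never leave the product's boundary.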

Published Date : Jan 25 2023



Breaking Analysis: Supercloud2 Explores Cloud Practitioner Realities & the Future of Data Apps


 

>> Narrator: From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Enterprise tech practitioners, like most of us, want to make their lives easier so they can focus on delivering more value to their businesses. And to do so, they want to tap best-of-breed services in the public cloud, but at the same time connect their on-prem intellectual property to emerging applications which drive top-line revenue and bottom-line profits. But creating a consistent experience across clouds and on-prem estates has been an elusive capability for most organizations, forcing trade-offs and injecting friction into the system. The need to create seamless experiences is clear, and the technology industry is starting to respond with platforms, architectures, and visions of what we've called the Supercloud. Hello and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis we give you a preview of Supercloud 2, the second event of its kind that we've had on the topic. Yes, folks, that's right, Supercloud 2 is here. As of this recording, it's just about four days away: 33 guests, 21 sessions, combining live discussions and fireside chats from theCUBE's Palo Alto Studio with prerecorded conversations on the future of cloud and data. You can register for free at supercloud.world. And we are super excited about the Supercloud 2 lineup of guests, whereas Supercloud22 in August was all about refining the definition of Supercloud, testing its technical feasibility, and understanding various deployment models. Supercloud 2 features practitioners, technologists, and analysts discussing what customers need, with real-world examples of Supercloud, and will expose thinking around a new breed of cross-cloud apps, data apps, if you will, that change the way machines and humans interact with each other. Now, the example we'd use: if you think about applications today, say a CRM system, sales reps, what are they doing? They're entering data into opportunities, they're choosing products, they're importing contacts, et cetera. And sure, the machine can then take all that data and spit out a forecast by rep, by region, by product, et cetera. But today's applications are largely about filling in forms and/or codifying processes. In the future, the Supercloud community sees a new breed of applications emerging where data resides on different clouds, in different data stores, databases, lakehouses, et cetera. And the machine uses AI to inspect the e-commerce system, the inventory data, supply chain information, and other systems, and puts together a plan without any human intervention whatsoever. Think about a system that orchestrates people, places, and things, like an Uber for business. So at Supercloud 2, you'll hear about this vision along with some of today's challenges facing practitioners. Zhamak Dehghani, the founder of data mesh, is a headliner. Kit Colbert also is headlining. He laid out at the first Supercloud an initial architecture for what that's going to look like. That was last August. And he's going to present his most current thinking on the topic. Veronika Durgin of Saks will be featured and talk about data sharing across clouds and, you know, what she needs in the future. One of the main highlights of Supercloud 2 is a dive into Walmart's Supercloud. Other featured practitioners include Western Union, Ionis Pharmaceuticals, and Warner Media.
We've got deep, deep technology dives with folks like Bob Muglia, David Flynn, Tristan Handy of DBT Labs, Nir Zuk, the founder of Palo Alto Networks, focused on security, and Thomas Hazel, who's going to talk about a new type of database for Supercloud. There are several analysts including Keith Townsend, Maribel Lopez, George Gilbert, Sanjeev Mohan, and so many more guests, we don't have time to list them all. They're all up on supercloud.world with a full agenda, so you can check that out. Now let's take a look at some of the things that we're exploring in more detail, starting with the Walmart Cloud Native Platform, they call it WCNP. We definitely see this as a Supercloud, and we dig into it with Jack Greenfield. He's the head of architecture at Walmart. Here's a quote from Jack. "WCNP is an implementation of Kubernetes for the Walmart ecosystem. We've taken Kubernetes off the shelf as open source." By the way, they do the same thing with OpenStack. "And we have integrated it with a number of foundational services that provide other aspects of our computational environment. Kubernetes off the shelf doesn't do everything." And so what Walmart chose to do, they took a do-it-yourself approach to build a Supercloud for a variety of reasons that Jack will explain, along with Walmart's so-called triplet architecture connecting on-prem, Azure, and GCP. No surprise, there's no Amazon at Walmart, for obvious reasons. And what they do is they create a common experience for devs across clouds. Jack is going to talk about how Walmart is evolving its Supercloud in the future. You don't want to miss that. Now, next, let's take a look at how Veronika Durgin of Saks thinks about data sharing across clouds. Data sharing, we think, is a potential killer use case for Supercloud. In fact, let's hear it in Veronika's own words. Please play the clip. >> How do we talk to each other? And more importantly, how do we data share? You know, I work with data, you know, this is what I do. So if, you know, I want to get data from a company that's using, say, Google, how do we share it in a smooth way where it doesn't have to be this crazy, I don't know, SFTP file moving? So that's where I think Supercloud comes to me in my mind, is like practical applications. How do we create that mesh, that network, that we can easily share data with each other? >> Now data mesh is a possible architectural approach that will enable more facile data sharing and the monetization of data products. You'll hear Zhamak Dehghani live in studio talking about what standards are missing to make this vision a reality across the Supercloud. Now one of the other things that we're really excited about is digging deeper into the right approach for Supercloud adoption. And we're going to share a preview of a debate that's going on right now in the community. Bob Muglia, former Snowflake CEO and Microsoft exec, was kind enough to spend some time looking at the community's supercloud definition, and he felt that it needed to be simplified. So in near real time he came up with the following definition that we're showing here. I'll read it. "A Supercloud is a platform that provides programmatically consistent services hosted on heterogeneous cloud providers." So not only did Bob simplify the initial definition, he stressed that the Supercloud is a platform versus an architecture, implying that the platform provider, e.g. Snowflake, VMware, Databricks, Cohesity, et cetera, is responsible for determining the architecture.
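To ground what "programmatically consistent services hosted on heterogeneous cloud providers" could look like in code, here is a small, hedged sketch: one storage interface with per-cloud adapters. The adapters are in-memory stand-ins rather than real SDK calls, and all the names are invented; it illustrates the definition, not any vendor's actual platform.

```python
# Hedged sketch of "programmatically consistent services hosted on heterogeneous
# cloud providers": one storage interface, many provider adapters. The adapters
# here are in-memory stand-ins; a real platform would wrap each cloud's SDK.
from abc import ABC, abstractmethod
from typing import Dict

class ObjectStore(ABC):
    """The consistent API the platform exposes, regardless of where data lives."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class AwsStore(ObjectStore):          # would delegate to S3 in a real adapter
    def __init__(self): self._objects: Dict[str, bytes] = {}
    def put(self, key, data): self._objects[key] = data
    def get(self, key): return self._objects[key]

class AzureStore(ObjectStore):        # would delegate to Blob Storage
    def __init__(self): self._objects: Dict[str, bytes] = {}
    def put(self, key, data): self._objects[key] = data
    def get(self, key): return self._objects[key]

def replicate(key: str, data: bytes, stores: list) -> None:
    """Application code is written once against ObjectStore; the platform decides
    which heterogeneous providers actually hold the bytes."""
    for store in stores:
        store.put(key, data)

stores = [AwsStore(), AzureStore()]
replicate("orders/2023-01-14.json", b'{"total": 42}', stores)
print(all(s.get("orders/2023-01-14.json") == b'{"total": 42}' for s in stores))
```

Whether that adapter layer is owned by a platform vendor or specified by an open architecture is exactly the platform-versus-architecture debate picked up next.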
Now interestingly, in the shared Google doc that the working group uses to collaborate on the supercloud definition, Dr. Nelu Mihai, who is actually building a Supercloud, responded as follows to Bob's assertion: "We need to avoid creating many Supercloud platforms with their own architectures. If we do that, then we create other proprietary clouds on top of existing ones. We need to define an architecture of how Supercloud interfaces with all other clouds. What is the information model? What is the execution model, and how will users interact with Supercloud?" What does this seemingly nuanced point tell us and why does it matter? Well, history suggests that de facto standards will emerge more quickly to resolve real-world practitioner problems and catch on more quickly than consensus-based and standards-based architectures. But in the long run, the latter may serve customers better. So we'll be exploring this topic in more detail in Supercloud 2, and of course we'd love to hear what you think: platform, architecture, both? Now one of the real technical gurus that we'll have in studio at Supercloud 2 is David Flynn. He's one of the people behind the movement that enabled enterprise flash adoption, that craze. And he did that with Fusion-io, and he is now working on a system to enable read/write data access to any user in any application in any data center or on any cloud anywhere. So think of this company as a Supercloud enabler. Allow me to share an excerpt from a conversation David Floyer and I had with David Flynn last year. He as well gave a lot of thought to the Supercloud definition and was really helpful with an opinionated point of view. He said something to us that we thought was relevant: "What is the operating system for a decentralized cloud? The main two functions of an operating system or an operating environment are, one, the process scheduler and, two, the file system. The strongest argument for supercloud is made when you go down to the platform layer and talk about it as an operating environment on which you can run all forms of applications." So a couple of implications here that we'll be exploring with David Flynn in studio. First, we're inferring from his comment that he's in the platform camp, where the platform owner is responsible for the architecture, and there are obviously trade-offs there and benefits, but we'll have to clarify that with him. And second, he's basically saying you kill the concept the further you move up the stack. So the further you move up the stack, the weaker the supercloud argument becomes, because it's just becoming SaaS. Now this is something we're going to explore to better understand his thinking on this, but also whether the existing notion of SaaS is changing and whether or not a new breed of Supercloud apps will emerge. Which brings us to this really interesting fellow that George Gilbert and I riffed with ahead of Supercloud 2. Tristan Handy, he's the founder and CEO of DBT Labs, and he has a highly opinionated and technical mind. Here's what he said: "One of the things that we still don't know how to API-ify is concepts that live inside of your data warehouse, inside of your data lake. These are core concepts that the business should be able to create applications around very easily. In fact, that's not the case, because it involves a lot of data engineering, pipeline, and other work to make these available.
So if you really want to make it easy to create these data experiences for users, you need to have an ability to describe these metrics and then to turn them into APIs to make them accessible to application developers who have literally no idea how they're calculated behind the scenes, and they don't need to." There are a lot of implications to this statement that we'll explore at Supercloud 2. Zhamak Dehghani's data mesh comes into play here, with her critique of hyper-specialized data pipeline experts with little or no domain knowledge. Also the need for simplified self-service infrastructure, which Kit Colbert is likely going to touch upon. Veronika Durgin of Saks and her ideal state for data sharing, along with Harveer Singh of Western Union. They've got to deal with 200 locations around the world and data privacy issues, data sovereignty; how do you share data safely? Same with Nick Taylor of Ionis Pharmaceuticals. And not to blow your mind, but Thomas Hazel and Bob Muglia posit that to make data apps a reality across the Supercloud you have to rethink everything. You can't just let in-memory databases and caching architectures take care of everything in a brute-force manner. Rather, you have to get down to really detailed levels, even things like how data is laid out on disk, i.e. flash, and think about rewriting applications for the Supercloud and the ML/AI era. All of this and more at Supercloud 2, which wouldn't be complete without some data. So we pinged our friends from ETR, Eric Bradley and Daren Brabham, to see if they had any data on Supercloud that we could tap. And so we're going to be analyzing a number of the players as well at Supercloud 2. Now, many of you are familiar with this graphic, where we show some of the players involved in delivering or enabling Supercloud-like capabilities. On the Y axis is spending momentum, and on the horizontal axis is market presence, or pervasiveness in the data. So Net Score versus what they call overlap, or N, in the data. And the table insert shows how the dots are plotted. Now, not to steal ETR's thunder, but the first point is you really can't have supercloud without the hyperscale cloud platforms, which is shown on this graphic. But the exciting aspect of Supercloud is the opportunity to build value on top of that hyperscale infrastructure. Snowflake here continues to show strong spending velocity, as do Databricks, HashiCorp, and Rubrik. VMware Tanzu, which we all put under the magnifying glass after the Broadcom announcements, is also showing momentum. Unfortunately, due to a scheduling conflict we weren't able to get Red Hat on the program, but they're clearly a player here. And we've put Cohesity and Veeam on the chart as well, because backup is a likely use case across clouds and on-premises. And now one other call-out that we drill down on at Supercloud 2 is CloudFlare, which actually uses the term supercloud, maybe in a different way. They look at Supercloud really as, you know, serverless on steroids. And so the data brains at ETR will have more to say on this topic at Supercloud 2, along with many others. Okay, so why should you attend Supercloud 2? What's-in-it-for-me kind of thing? So first of all, if you're a practitioner and you want to understand what the possibilities are for doing cross-cloud services, for monetizing data, how your peers are doing data sharing, how some of your peers are actually building out a Supercloud, you're going to get real-world input from practitioners.
If you're a technologist and you're trying to figure out various ways to solve problems around data, data sharing, and cross-cloud service deployment, there are going to be a number of deep technology experts that are going to share how they're doing it. We're also going to drill down with Walmart into a practical example of Supercloud, with some other examples of how practitioners are dealing with cross-cloud complexity. Some of them, by the way, have kind of thrown up their hands and are saying, hey, we're going mono-cloud. And we'll talk about the potential implications and dangers and risks of doing that, and also some of the benefits. You know, there's a question, right? Is Supercloud the same wine in a new bottle, or is it truly something different that can drive substantive business value? So look, go to supercloud.world; it's January 17th at 9:00 AM Pacific. You can register for free and participate directly in the program. Okay, that's a wrap. I want to give a shout-out to the Supercloud supporters. VMware has been a great partner as our anchor sponsor, and Chaos Search, Proximo, and Alura as well. For contributing to the effort I want to thank Alex Myerson, who's on production and manages the podcast. Ken Schiffman is part of his supporting cast as well. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Ho is our editor-in-chief over at SiliconANGLE. Thank you all. Remember, these episodes are all available as podcasts. Wherever you listen, we really appreciate the support that you've given. We just saw some stats from Buzz Sprout: we hit the top 25%, and we're almost at 400,000 downloads last year. So really appreciate your participation. All you got to do is search "Breaking Analysis podcast" and you'll find those. I publish each week on wikibon.com and siliconangle.com. Or if you want to get ahold of me, you can email me directly at David.Vellante@siliconangle.com or DM me @DVellante or comment on our LinkedIn post. I want you to check out etr.ai. They've got the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching. We'll see you next week at Supercloud 2, or next time on Breaking Analysis. (light music)
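Circling back to Tristan Handy's point above about describing metrics that live in the warehouse and turning them into APIs: the sketch below is a hedged illustration of that idea, a metric defined once as data and exposed through a tiny API so application developers never see how it is calculated. The spec format and helper names are invented for illustration and are not DBT Labs' semantic layer.

```python
# Hedged illustration: describe a metric declaratively, then serve it via an API
# so app developers consume "monthly_revenue" without knowing the SQL behind it.
# The spec format and helper names are invented; this is not dbt's semantic layer.
import sqlite3

METRICS = {
    "monthly_revenue": {
        "sql": "SELECT strftime('%Y-%m', ordered_at) AS month, SUM(amount) "
               "FROM orders GROUP BY month ORDER BY month",
        "description": "Sum of order amounts per calendar month",
    }
}

def get_metric(conn: sqlite3.Connection, name: str):
    """The 'API': callers pass a metric name, never the calculation itself."""
    spec = METRICS[name]
    return conn.execute(spec["sql"]).fetchall()

# Tiny in-memory warehouse stand-in so the example runs end to end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (ordered_at TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("2023-01-05", 100.0), ("2023-01-20", 50.0), ("2023-02-02", 75.0)])
print(get_metric(conn, "monthly_revenue"))   # [('2023-01', 150.0), ('2023-02', 75.0)]
```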

Published Date : Jan 14 2023



Breaking Analysis: Grading our 2022 Enterprise Technology Predictions


 

>>From the Cube Studios in Palo Alto in Boston, bringing you data-driven insights from the cube and E T R. This is breaking analysis with Dave Valante. >>Making technology predictions in 2022 was tricky business, especially if you were projecting the performance of markets or identifying I P O prospects and making binary forecast on data AI and the macro spending climate and other related topics in enterprise tech 2022, of course was characterized by a seesaw economy where central banks were restructuring their balance sheets. The war on Ukraine fueled inflation supply chains were a mess. And the unintended consequences of of forced march to digital and the acceleration still being sorted out. Hello and welcome to this week's weekly on Cube Insights powered by E T R. In this breaking analysis, we continue our annual tradition of transparently grading last year's enterprise tech predictions. And you may or may not agree with our self grading system, but look, we're gonna give you the data and you can draw your own conclusions and tell you what, tell us what you think. >>All right, let's get right to it. So our first prediction was tech spending increases by 8% in 2022. And as we exited 2021 CIOs, they were optimistic about their digital transformation plans. You know, they rushed to make changes to their business and were eager to sharpen their focus and continue to iterate on their digital business models and plug the holes that they, the, in the learnings that they had. And so we predicted that 8% rise in enterprise tech spending, which looked pretty good until Ukraine and the Fed decided that, you know, had to rush and make up for lost time. We kind of nailed the momentum in the energy sector, but we can't give ourselves too much credit for that layup. And as of October, Gartner had it spending growing at just over 5%. I think it was 5.1%. So we're gonna take a C plus on this one and, and move on. >>Our next prediction was basically kind of a slow ground ball. The second base, if I have to be honest, but we felt it was important to highlight that security would remain front and center as the number one priority for organizations in 2022. As is our tradition, you know, we try to up the degree of difficulty by specifically identifying companies that are gonna benefit from these trends. So we highlighted some possible I P O candidates, which of course didn't pan out. S NQ was on our radar. The company had just had to do another raise and they recently took a valuation hit and it was a down round. They raised 196 million. So good chunk of cash, but, but not the i p O that we had predicted Aqua Securities focus on containers and cloud native. That was a trendy call and we thought maybe an M SS P or multiple managed security service providers like Arctic Wolf would I p o, but no way that was happening in the crummy market. >>Nonetheless, we think these types of companies, they're still faring well as the talent shortage in security remains really acute, particularly in the sort of mid-size and small businesses that often don't have a sock Lacework laid off 20% of its workforce in 2022. And CO C e o Dave Hatfield left the company. So that I p o didn't, didn't happen. It was probably too early for Lacework. Anyway, meanwhile you got Netscope, which we've cited as strong in the E T R data as particularly in the emerging technology survey. 
And then, you know, I lumia holding its own, you know, we never liked that 7 billion price tag that Okta paid for auth zero, but we loved the TAM expansion strategy to target developers beyond sort of Okta's enterprise strength. But we gotta take some points off of the failure thus far of, of Okta to really nail the integration and the go to market model with azero and build, you know, bring that into the, the, the core Okta. >>So the focus on endpoint security that was a winner in 2022 is CrowdStrike led that charge with others holding their own, not the least of which was Palo Alto Networks as it continued to expand beyond its core network security and firewall business, you know, through acquisition. So overall we're gonna give ourselves an A minus for this relatively easy call, but again, we had some specifics associated with it to make it a little tougher. And of course we're watching ve very closely this this coming year in 2023. The vendor consolidation trend. You know, according to a recent Palo Alto network survey with 1300 SecOps pros on average organizations have more than 30 tools to manage security tools. So this is a logical way to optimize cost consolidating vendors and consolidating redundant vendors. The E T R data shows that's clearly a trend that's on the upswing. >>Now moving on, a big theme of 2020 and 2021 of course was remote work and hybrid work and new ways to work and return to work. So we predicted in 2022 that hybrid work models would become the dominant protocol, which clearly is the case. We predicted that about 33% of the workforce would come back to the office in 2022 in September. The E T R data showed that figure was at 29%, but organizations expected that 32% would be in the office, you know, pretty much full-time by year end. That hasn't quite happened, but we were pretty close with the projection, so we're gonna take an A minus on this one. Now, supply chain disruption was another big theme that we felt would carry through 2022. And sure that sounds like another easy one, but as is our tradition, again we try to put some binary metrics around our predictions to put some meat in the bone, so to speak, and and allow us than you to say, okay, did it come true or not? >>So we had some data that we presented last year and supply chain issues impacting hardware spend. We said at the time, you can see this on the left hand side of this chart, the PC laptop demand would remain above pre covid levels, which would reverse a decade of year on year declines, which I think started in around 2011, 2012. Now, while demand is down this year pretty substantially relative to 2021, I D C has worldwide unit shipments for PCs at just over 300 million for 22. If you go back to 2019 and you're looking at around let's say 260 million units shipped globally, you know, roughly, so, you know, pretty good call there. Definitely much higher than pre covid levels. But so what you might be asking why the B, well, we projected that 30% of customers would replace security appliances with cloud-based services and that more than a third would replace their internal data center server and storage hardware with cloud services like 30 and 40% respectively. >>And we don't have explicit survey data on exactly these metrics, but anecdotally we see this happening in earnest. 
And we do have some data that we're showing here on cloud adoption from ET R'S October survey where the midpoint of workloads running in the cloud is around 34% and forecast, as you can see, to grow steadily over the next three years. So this, well look, this is not, we understand it's not a one-to-one correlation with our prediction, but it's a pretty good bet that we were right, but we gotta take some points off, we think for the lack of unequivocal proof. Cause again, we always strive to make our predictions in ways that can be measured as accurate or not. Is it binary? Did it happen, did it not? Kind of like an O K R and you know, we strive to provide data as proof and in this case it's a bit fuzzy. >>We have to admit that although we're pretty comfortable that the prediction was accurate. And look, when you make an hard forecast, sometimes you gotta pay the price. All right, next, we said in 2022 that the big four cloud players would generate 167 billion in IS and PaaS revenue combining for 38% market growth. And our current forecasts are shown here with a comparison to our January, 2022 figures. So coming into this year now where we are today, so currently we expect 162 billion in total revenue and a 33% growth rate. Still very healthy, but not on our mark. So we think a w s is gonna miss our predictions by about a billion dollars, not, you know, not bad for an 80 billion company. So they're not gonna hit that expectation though of getting really close to a hundred billion run rate. We thought they'd exit the year, you know, closer to, you know, 25 billion a quarter and we don't think they're gonna get there. >>Look, we pretty much nailed Azure even though our prediction W was was correct about g Google Cloud platform surpassing Alibaba, Alibaba, we way overestimated the performance of both of those companies. So we're gonna give ourselves a C plus here and we think, yeah, you might think it's a little bit harsh, we could argue for a B minus to the professor, but the misses on GCP and Alibaba we think warrant a a self penalty on this one. All right, let's move on to our prediction about Supercloud. We said it becomes a thing in 2022 and we think by many accounts it has, despite the naysayers, we're seeing clear evidence that the concept of a layer of value add that sits above and across clouds is taking shape. And on this slide we showed just some of the pickup in the industry. I mean one of the most interesting is CloudFlare, the biggest supercloud antagonist. >>Charles Fitzgerald even predicted that no vendor would ever use the term in their marketing. And that would be proof if that happened that Supercloud was a thing and he said it would never happen. Well CloudFlare has, and they launched their version of Supercloud at their developer week. Chris Miller of the register put out a Supercloud block diagram, something else that Charles Fitzgerald was, it was was pushing us for, which is rightly so, it was a good call on his part. And Chris Miller actually came up with one that's pretty good at David Linthicum also has produced a a a A block diagram, kind of similar, David uses the term metacloud and he uses the term supercloud kind of interchangeably to describe that trend. And so we we're aligned on that front. Brian Gracely has covered the concept on the popular cloud podcast. Berkeley launched the Sky computing initiative. >>You read through that white paper and many of the concepts highlighted in the Supercloud 3.0 community developed definition align with that. 
Walmart launched a platform with many of the supercloud salient attributes. So did Goldman Sachs, so did Capital One, so did nasdaq. So you know, sorry you can hate the term, but very clearly the evidence is gathering for the super cloud storm. We're gonna take an a plus on this one. Sorry, haters. Alright, let's talk about data mesh in our 21 predictions posts. We said that in the 2020s, 75% of large organizations are gonna re-architect their big data platforms. So kind of a decade long prediction. We don't like to do that always, but sometimes it's warranted. And because it was a longer term prediction, we, at the time in, in coming into 22 when we were evaluating our 21 predictions, we took a grade of incomplete because the sort of decade long or majority of the decade better part of the decade prediction. >>So last year, earlier this year, we said our number seven prediction was data mesh gains momentum in 22. But it's largely confined and narrow data problems with limited scope as you can see here with some of the key bullets. So there's a lot of discussion in the data community about data mesh and while there are an increasing number of examples, JP Morgan Chase, Intuit, H S P C, HelloFresh, and others that are completely rearchitecting parts of their data platform completely rearchitecting entire data platforms is non-trivial. There are organizational challenges, there're data, data ownership, debates, technical considerations, and in particular two of the four fundamental data mesh principles that the, the need for a self-service infrastructure and federated computational governance are challenging. Look, democratizing data and facilitating data sharing creates conflicts with regulatory requirements around data privacy. As such many organizations are being really selective with their data mesh implementations and hence our prediction of narrowing the scope of data mesh initiatives. >>I think that was right on J P M C is a good example of this, where you got a single group within a, within a division narrowly implementing the data mesh architecture. They're using a w s, they're using data lakes, they're using Amazon Glue, creating a catalog and a variety of other techniques to meet their objectives. They kind of automating data quality and it was pretty well thought out and interesting approach and I think it's gonna be made easier by some of the announcements that Amazon made at the recent, you know, reinvent, particularly trying to eliminate ET t l, better connections between Aurora and Redshift and, and, and better data sharing the data clean room. So a lot of that is gonna help. Of course, snowflake has been on this for a while now. Many other companies are facing, you know, limitations as we said here and this slide with their Hadoop data platforms. They need to do new, some new thinking around that to scale. HelloFresh is a really good example of this. Look, the bottom line is that organizations want to get more value from data and having a centralized, highly specialized teams that own the data problem, it's been a barrier and a blocker to success. The data mesh starts with organizational considerations as described in great detail by Ash Nair of Warner Brothers. So take a listen to this clip. >>Yeah, so when people think of Warner Brothers, you always think of like the movie studio, but we're more than that, right? I mean, you think of H B O, you think of t n t, you think of C N N. We have 30 plus brands in our portfolio and each have their own needs. 
So the, the idea of a data mesh really helps us because what we can do is we can federate access across the company so that, you know, CNN can work at their own pace. You know, when there's election season, they can ingest their own data and they don't have to, you know, bump up against, as an example, HBO if Game of Thrones is going on. >>So it's often the case that data mesh is in the eyes of the implementer. And while a company's implementation may not strictly adhere to Jamma Dani's vision of data mesh, and that's okay, the goal is to use data more effectively. And despite Gartner's attempts to deposition data mesh in favor of the somewhat confusing or frankly far more confusing data fabric concept that they stole from NetApp data mesh is taking hold in organizations globally today. So we're gonna take a B on this one. The prediction is shaping up the way we envision, but as we previously reported, it's gonna take some time. The better part of a decade in our view, new standards have to emerge to make this vision become reality and they'll come in the form of both open and de facto approaches. Okay, our eighth prediction last year focused on the face off between Snowflake and Databricks. >>And we realized this popular topic, and maybe one that's getting a little overplayed, but these are two companies that initially, you know, looked like they were shaping up as partners and they, by the way, they are still partnering in the field. But you go back a couple years ago, the idea of using an AW w s infrastructure, Databricks machine intelligence and applying that on top of Snowflake as a facile data warehouse, still very viable. But both of these companies, they have much larger ambitions. They got big total available markets to chase and large valuations that they have to justify. So what's happening is, as we've previously reported, each of these companies is moving toward the other firm's core domain and they're building out an ecosystem that'll be critical for their future. So as part of that effort, we said each is gonna become aggressive investors and maybe start doing some m and a and they have in various companies. >>And on this chart that we produced last year, we studied some of the companies that were targets and we've added some recent investments of both Snowflake and Databricks. As you can see, they've both, for example, invested in elation snowflake's, put money into Lacework, the Secur security firm, ThoughtSpot, which is trying to democratize data with ai. Collibra is a governance platform and you can see Databricks investments in data transformation with D B T labs, Matillion doing simplified business intelligence hunters. So that's, you know, they're security investment and so forth. So other than our thought that we'd see Databricks I p o last year, this prediction been pretty spot on. So we'll give ourselves an A on that one. Now observability has been a hot topic and we've been covering it for a while with our friends at E T R, particularly Eric Bradley. Our number nine prediction last year was basically that if you're not cloud native and observability, you are gonna be in big trouble. >>So everything guys gotta go cloud native. And that's clearly been the case. Splunk, the big player in the space has been transitioning to the cloud, hasn't always been pretty, as we reported, Datadog real momentum, the elk stack, that's open source model. 
You got new entrants that we've cited before, like Observe, Honeycomb, ChaosSearch and others that we've reported on. They're all born in the cloud. So we're gonna take another A on this one. Admittedly, yeah, it's a reasonably easy call, but you gotta have a few of those in the mix. Okay, our last prediction, our number 10, was around events, something theCUBE knows a little bit about. We said that a new category of events would emerge as hybrid, and that for the most part has happened. So that's gonna be the mainstay is what we said, that pure play virtual events are gonna give way to hybrid. >> And the narrative is that virtual-only events are, you know, good for quick hits, but lousy replacements for in-person events. And you know, that said, organizations of all shapes and sizes, they learned how to create better virtual content and support remote audiences during the pandemic. So when we said that pure play is gonna give way to hybrid, we implied, or specified, that the physical event, that VIP experience, is going to define the overall experience, and those VIP events would create a little FOMO, fear of missing out, and a virtual component would overlay that, serving an audience 10x the size of the physical. We saw really two really good examples. Red Hat Summit in Boston, small event, couple thousand people, served tens of thousands, you know, online. Second was the Google Cloud Next VIP event in New York City. >> Everything else was virtual. You know, even examples of our prediction of metaverse-like immersion have popped up, and, you know, other companies are doing roadshows, as we predicted. Like, a lot of companies are doing it. You're seeing that as a major trend, where organizations are going with their sales teams out into the regions and doing a little belly-to-belly action, as opposed to the big giant event. That's definitely a trend that we're seeing. So in reviewing this prediction, the grade we gave ourselves is, you know, maybe a bit unfair. You could argue for a higher grade, but the organizations still haven't figured it out. They have hybrid experiences, but they generally do a really poor job of leveraging the afterglow of an event. It still tends to be one and done, let's move on to the next event or the next city. >> Let the sales team pick up the pieces, if they were paying attention. So because of that, we're only taking a B plus on this one. Okay, so that's the review of last year's predictions. You know, overall, if you average out our grades on the 10 predictions, that comes out to a B plus. I dunno why we can't seem to get that elusive A, but we're gonna keep trying. Our friends at ETR and we are starting to look at the data for 2023 from the surveys and all the work that we've done on theCUBE, and our analysis, and we're gonna put together our predictions. We've had literally hundreds of inbounds from PR pros pitching us. We've got this huge thick folder that we've started to review with our yellow highlighter. And our plan is to review it this month, take a look at all the data, get some ideas from the inbounds, and then the ETR January survey is in the field. >> It's probably got a little over a thousand responses right now. You know, they'll get up to, you know, 1,400 or so. And once we've digested all that, we're gonna go back and publish our predictions for 2023 sometime in January. So stay tuned for that. 
All right, we're gonna leave it there for today. I want to thank Alex Myerson, who's on production and manages the podcast, and Ken Schiffman as well, out of our Boston studio. A really heartfelt thank you to Kristen Martin and Cheryl Knight and their team. They help get the word out on social and in our newsletters. Rob Hof is our editor in chief over at SiliconANGLE, who does some great editing for us. Thank you all. Remember, all these episodes are available as podcasts. Wherever you listen, just search Breaking Analysis podcast. Really getting some great traction there. Appreciate you guys subscribing. I publish each week on wikibon.com and siliconangle.com, or you can email me directly at david.vellante@siliconangle.com or DM me @dvellante, or you can comment on my LinkedIn posts. And please check out etr.ai for the very best survey data in the enterprise tech business. Some awesome stuff in there. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching and we'll see you next time on Breaking Analysis.

Published Date : Dec 18 2022

Felix Van de Maele, Collibra, Data Citizens 22


 

(upbeat techno music) >> Collibra is a company that was founded in 2008 right before the so-called modern big data era kicked into high gear. The company was one of the first to focus its business on data governance. Now, historically, data governance and data quality initiatives, they were back office functions, and they were largely confined to regulated industries that had to comply with public policy mandates. But as the cloud went mainstream the tech giants showed us how valuable data could become, and the value proposition for data quality and trust, it evolved from primarily a compliance driven issue, to becoming a linchpin of competitive advantage. But, data in the decade of the 2010s was largely about getting the technology to work. You had these highly centralized technical teams that were formed and they had hyper-specialized skills, to develop data architectures and processes, to serve the myriad data needs of organizations. And it resulted in a lot of frustration, with data initiatives for most organizations, that didn't have the resources of the cloud guys and the social media giants, to really attack their data problems and turn data into gold. This is why today, for example, there's quite a bit of momentum to re-thinking monolithic data architectures. You see, you hear about initiatives like Data Mesh and the idea of data as a product. They're gaining traction as a way to better serve the the data needs of decentralized business users. You hear a lot about data democratization. So these decentralization efforts around data, they're great, but they create a new set of problems. Specifically, how do you deliver, like a self-service infrastructure to business users and domain experts? Now the cloud is definitely helping with that but also, how do you automate governance? This becomes especially tricky as protecting data privacy has become more and more important. In other words, while it's enticing to experiment, and run fast and loose with data initiatives, kind of like the Wild West, to find new veins of gold, it has to be done responsibly. As such, the idea of data governance has had to evolve to become more automated and intelligent. Governance and data lineage is still fundamental to ensuring trust as data. It moves like water through an organization. No one is going to use data that is entrusted. Metadata has become increasingly important for data discovery and data classification. As data flows through an organization, the continuously ability to check for data flaws and automating that data quality, they become a functional requirement of any modern data management platform. And finally, data privacy has become a critical adjacency to cyber security. So you can see how data governance has evolved into a much richer set of capabilities than it was 10 or 15 years ago. Hello and welcome to theCUBE's coverage of Data Citizens made possible by Collibra, a leader in so-called Data intelligence and the host of Data Citizens 2022, which is taking place in San Diego. My name is Dave Vellante and I'm one of the hosts of our program which is running in parallel to Data Citizens. Now at theCUBE we like to say we extract the signal from the noise, and over the next couple of days we're going to feature some of the themes from the keynote speakers at Data Citizens, and we'll hear from several of the executives. Felix Van de Maele, who is the co-founder and CEO of Collibra, will join us. 
Along with one of the other founders of Collibra, Stan Christiaens, who's going to join my colleague Lisa Martin. I'm going to also sit down with Laura Sellers, she's the Chief Product Officer at Collibra. We'll talk about some of the the announcements and innovations they're making at the event, and then we'll dig in further to data quality with Kirk Haslbeck. He's the Vice President of Data Quality at Collibra. He's an amazingly smart dude who founded Owl DQ, a company that he sold to Collibra last year. Now, many companies they didn't make it through the Hadoop era, you know they missed the industry waves and they became driftwood. Collibra, on the other hand, has evolved its business, they've leveraged the cloud, expanded its product portfolio and leaned in heavily to some major partnerships with cloud providers as well as receiving a strategic investment from Snowflake, earlier this year. So, it's a really interesting story that we're thrilled to be sharing with you. Thanks for watching and I hope you enjoy the program. (upbeat rock music) Last year theCUBE covered Data Citizens, Collibra's customer event, and the premise that we put forth prior to that event was that despite all the innovation that's gone on over the last decade or more with data, you know starting with the Hadoop movement, we had Data lakes, we had Spark, the ascendancy of programming languages like Python, the introduction of frameworks like Tensorflow, the rise of AI, Low Code, No Code, et cetera. Businesses still find it's too difficult to get more value from their data initiatives, and we said at the time, you know maybe it's time to rethink data innovation. While a lot of the effort has been focused on, you more efficiently storing and processing data, perhaps more energy needs to go into thinking about the people and the process side of the equation. Meaning, making it easier for domain experts to both gain insights from data, trust the data, and begin to use that data in new ways, fueling data products, monetization, and insights. Data Citizens 2022 is back and we're pleased to have Felix Van de Maele who is the founder and CEO of Collibra. He's on theCUBE. We're excited to have you Felix. Good to see you again. >> Likewise Dave. Thanks for having me again. >> You bet. All right, we're going to get the update from Felix on the current data landscape, how he sees it why data intelligence is more important now than ever, and get current on what Collibra has been up to over the past year, and what's changed since Data citizens 2021, and we may even touch on some of the product news. So Felix, we're living in a very different world today with businesses and consumers. They're struggling with things like supply chains, uncertain economic trends and we're not just snapping back to the 2010s, that's clear, and that's really true as well in the world of data. So what's different in your mind, in the data landscape of the 2020s, from the previous decade, and what challenges does that bring for your customers? >> Yeah, absolutely, and and I think you said it well, Dave and the intro that, that rising complexity and fragmentation, in the broader data landscape, that hasn't gotten any better over the last couple of years. When when we talk to our customers, that level of fragmentation, the complexity, how do we find data that we can trust, that we know we can use, has only gotten more more difficult. So that trend that's continuing, I think what is changing is that trend has become much more acute. 
Well, the other thing we've seen over the last couple of years is that the level of scrutiny that organizations are under with respect to data, as data becomes more mission critical, more impactful and important, the level of scrutiny with respect to privacy, security, and regulatory compliance is only increasing as well. Which again, is really difficult in this environment of continuous innovation, continuous change, continuously growing complexity and fragmentation. So, it's become much more acute. And to your earlier point, we do live in a different world, and the past couple of years we could probably just kind of brute force it, right? We could focus on the top line; there was enough kind of investment to be had. I think nowadays organizations are in a very different environment, where there's much more focus on cost control, productivity, efficiency, how do we truly get the value from that data? So again, I think it's just another incentive for organizations to now truly look at data and to scale with data, not just from a technology and infrastructure perspective, but how do we actually scale data from an organizational perspective, right? You said it, the people and process, how do we do that at scale? And that's only becoming much more important, and we do believe that the economic environment that we find ourselves in today is going to be a catalyst for organizations to really take that more seriously, if you will, than they maybe have in the past. >> You know, I don't know when you guys founded Collibra, if you had a sense as to how complicated it was going to get, but you've been on a mission to really address these problems from the beginning. How would you describe your mission, and what are you doing to address these challenges? >> Yeah, absolutely. We started Collibra in 2008, so in some sense during the last financial crisis, and that was really the start of Collibra, where we found product market fit working with large financial institutions to help them cope with the increasing compliance requirements that they were faced with because of the financial crisis. And kind of here we are again, in a very different environment of course, 15 years, almost 15 years later, but data only becoming more important. But our mission, to deliver trusted data for every user, every use case, and across every source, frankly, has only become more important. So, while it has been an incredible journey over the last 14, 15 years, I think we're still relatively early in our mission to, again, be able to provide everyone, and that's why we call it Data Citizens, we truly believe that everyone in the organization should be able to use trusted data in an easy manner. That mission is only becoming more important, more relevant. We definitely have a lot more work ahead of us, because we're still relatively early in that journey. >> Well, that's interesting, because you know, in my observation it takes 7 to 10 years to actually build a company, and the fact that you're still in the early days is kind of interesting. I mean, Collibra's had a good 12 months or so since we last spoke at Data Citizens. Give us the latest update on your business. What do people need to know about your current momentum? >> Yeah, absolutely. 
Again, there's a lot of tailwind. Organizations are only maturing their data practices, and we've seen that kind of transform or influence a lot of the business growth that we've seen, broader adoption of the platform. We work with some of the largest organizations in the world, with Adobe, Heineken, Bank of America and many more. We have now over 600 enterprise customers, all industry leaders, in every single vertical. So it's really exciting to see that and continue to partner with those organizations. On the partnership side, again, a lot of momentum in the market with some of the cloud partners like Google, Amazon, Snowflake, Databricks and others, right? As those kind of new modern data infrastructures, modern data architectures, are definitely all moving to the cloud. A great opportunity for us, our partners, and of course our customers, to help them kind of transition to the cloud even faster. And so we see a lot of excitement and momentum there. We did an acquisition about 18 months ago around data quality, data observability, which we believe is an enormous opportunity. Of course data quality isn't new, but I think there's a lot of reasons why we're so excited about quality and observability now. One is around leveraging AI and machine learning, again, to drive more automation. And a second is that those data pipelines that are now being created in the cloud, in these modern data architectures, they've become mission critical. They've become real time. And so monitoring, observing those data pipelines continuously has become absolutely critical, so we're really excited about that as well. And on the organizational side, I'm sure you've heard the term data mesh, something that's gaining a lot of momentum, rightfully so. It's really the type of governance that we always believed in. Federated, focused on domains, giving a lot of ownership to different teams. I think that's the way to scale data organizations, and so that aligns really well with our vision, and from a product perspective, we've seen a lot of momentum with our customers there as well. >> Yeah, you know, a couple things there. I mean, the acquisition of OwlDQ, you know, Kirk Haslbeck and their team. It's interesting, you know, the whole data quality thing used to be this back office function, really confined to highly regulated industries. It's come to the front office, it's top of mind for Chief Data Officers. Data mesh, you mentioned, you guys are a connective tissue for all these different nodes on the data mesh. That's key. And of course we see you at all the shows. You're a critical part of many ecosystems and you're developing your own ecosystem. So, let's chat a little bit about the products. We're going to go deeper into products later on at Data Citizens 22, but we know you're debuting some new innovations, you know, whether it's under the covers in security, making data more accessible for people, or dealing with workflows and processes, as you talked about earlier. Tell us a little bit about what you're introducing. >> Yeah, absolutely. We're super excited, a ton of innovation. And if we think about the big theme, like I said, we're still relatively early in this journey towards kind of that mission of data intelligence, that really bold and compelling mission. Many of our customers are just starting on that journey. 
We want to make it as easy as possible for organizations to actually get started, because we know that's important that they do. And for the organizations and customers that have been with us for some time, there's still a tremendous amount of opportunity to expand the platform further. And again, to make it easier to really accomplish that mission and vision around the Data Citizen, that everyone has access to trustworthy data in a very easy way. So that's really the theme of a lot of the innovation that we're driving. A lot of ease of adoption, ease of use, but also, how do we make sure that, as Collibra becomes this kind of mission critical enterprise platform, from a security, performance, architecture, scale and supportability perspective, we're truly able to deliver that kind of enterprise mission critical platform. And so that's the big theme. From an innovation perspective, from a product perspective, a lot of new innovation that we're really excited about. A couple of highlights. One is around the data marketplace. Again, a lot of our customers have plans in that direction. How do we make it easy? How do we make available a true kind of shopping experience, so that anybody in the organization can, in a very easy, search-first way, find the right data product, find the right dataset that they can then consume? Usage analytics, how do we help organizations drive adoption, tell them where things are working really well and where they have opportunities. Homepages, again, to make things easy for anyone in your organization to get started with Collibra. You mentioned the workflow designer. Again, we have a very powerful enterprise platform, and one of our key differentiators is the ability to really drive a lot of automation through workflows. And now we've provided a new low-code, no-code kind of workflow designer experience, so customers can really take it to the next level. There's a lot more new product around Collibra Protect, which, in partnership with Snowflake, which has been a strategic investor in Collibra, is focused on how do we make access governance easier. How are we able to make sure that as you move to the cloud, things like access management and masking around sensitive data, PII data, are managed in a much more effective way. Really excited about that product. There's more around data quality. Again, how do we get that deployed as easily and quickly and widely as we can? Moving that to the cloud has been a big part of our strategy, so we launched our Data Quality Cloud product, as well as making use of those native compute capabilities in platforms like Snowflake, Databricks, Google, Amazon and others. And so we're delivering a capability that we call pushdown. We're actually pushing down the compute for data quality monitoring into the underlying platform, which, again, from a scale, performance and ease of use perspective, is going to make a massive difference. And then more broadly, we talked a little bit about the ecosystem. Again, integrations, we talk about being able to connect to every source. Integrations are absolutely critical, and we're really excited to deliver new integrations with Snowflake, Azure and Google Cloud Storage as well. So that's a lot coming out. The team has been at work really hard, and we are really, really excited about what we're bringing to market. 
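To make the pushdown idea above concrete, here is a minimal sketch of a data quality check that runs inside the warehouse and returns only the metric, rather than pulling rows out to profile them. This is not Collibra's product API; Snowflake simply stands in for any engine that can run the aggregation natively, and the connection settings, table, column, and threshold below are hypothetical.

# Minimal sketch of the pushdown idea: run the data quality check as SQL
# inside Snowflake and fetch only the small metric result.
# NOTE: illustrative only, not Collibra's product API. Connection parameters,
# table, column, and threshold are hypothetical.
import os
import snowflake.connector  # pip install snowflake-connector-python

def null_rate_check(table: str, column: str, max_null_rate: float = 0.01) -> bool:
    """Return True if the column's null rate is within the allowed threshold."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],   # hypothetical env vars
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_WH",                  # hypothetical warehouse
        database="SALES_DB",                       # hypothetical database
        schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        # The aggregation runs entirely in Snowflake; only two numbers come back.
        cur.execute(
            f"SELECT COUNT(*) AS total_rows, "
            f"COUNT_IF({column} IS NULL) AS null_rows FROM {table}"
        )
        total_rows, null_rows = cur.fetchone()
        null_rate = (null_rows / total_rows) if total_rows else 0.0
        print(f"{table}.{column}: {null_rate:.4%} null ({null_rows}/{total_rows})")
        return null_rate <= max_null_rate
    finally:
        conn.close()

if __name__ == "__main__":
    # Hypothetical usage: flag the ORDERS table if CUSTOMER_ID nulls exceed 1%.
    ok = null_rate_check("ORDERS", "CUSTOMER_ID", max_null_rate=0.01)
    print("check passed" if ok else "check failed")

The design point is the one Felix describes: the heavy scan happens next to the data, and the monitoring layer only ever sees a couple of numbers, which is what makes continuous checks on large, real-time pipelines practical.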
>> Yeah, a lot going on there. I wonder if you could give us your closing thoughts. I mean, you talked about the marketplace, you think about data mesh, you think of data as product, one of the key principles, you think about monetization. This is really different than what we've been used to in data, where just getting the technology to work has been so hard. So, how do you see the future? Give us your closing thoughts, please. >> Yeah, absolutely. And I think we're really at a pivotal moment, and I think you said it well. We all know the constraints and the challenges with data, how to actually do data at scale. And while we've seen a ton of innovation on the infrastructure side, we fundamentally believe that just getting a faster database is important, but it's not going to fully solve the challenges and truly deliver on the opportunity. And that's why now is really the time to deliver this data intelligence vision, this data intelligence platform. We are still early, and making it as easy as we can is kind of our mission. And so I'm really, really excited to see how the market is going to evolve over the next few quarters and years. I think the trend is clearly there. We talked about data mesh, this kind of federated approach focused on data products, is just another signal that we believe a lot of organizations are now at the point where they understand the need to go beyond just the technology. Really think about how to actually scale data as a business function, just like we've done with IT, with HR, with sales and marketing, with finance. That's how we need to think about data. I think now is the time, given the economic environment that we're in, much more focus on cost control, much more focus on productivity and efficiency. Now is the time we need to look beyond just the technology and infrastructure to think about how to scale data, how to manage data at scale. >> Yeah, it's a new era. The next 10 years of data won't be like the last, as I always say. Felix, thanks so much. Good luck in San Diego. I know you're going to crush it out there. >> Thank you, Dave. >> Yeah, it's a great spot for an in-person event, and of course the content post-event is going to be available at collibra.com, and you can of course catch theCUBE coverage at theCUBE.net and all the news at siliconangle.com. This is Dave Vellante for theCUBE, your leader in enterprise and emerging tech coverage. (upbeat techno music)

Published Date : Nov 2 2022

Day 2 Keynote Analysis & Wrap | KubeCon + CloudNativeCon NA 2022


 

>> My name is Dave Vellante, and I'm a long time industry analyst. So when you're as old as I am, you've seen a lot of transitions. Everybody talks about industry cycles and waves. I've seen many, many waves. I've met a lot of industry executives, and I'm a little bit of an industry historian. When you interview many thousands of people, probably five or 6,000 people as I have over the last half of a decade, you get to interact with a lot of people's knowledge and you begin to develop patterns. And so that's sort of what I bring, an ability to catalyze the conversation and, you know, share that knowledge with others in the community. Our philosophy is everybody's an expert at something. Everybody's passionate about something and has real deep knowledge about that something. Well, we wanna focus in on that area and extract that knowledge and share it with our communities. This is Dave Vellante. Thanks for watching theCUBE. >> Hello everyone and welcome back to theCUBE, where we are streaming live this week from KubeCon. I am Savannah Peterson and I am joined by an absolutely stellar lineup of Cube brilliance this afternoon. To my left, a familiar face, Lisa Martin. Lisa, how you feeling? End of day two. >> Excellent. It was so much fun today. The buzz started yesterday, the momentum, the swell, and we only heard even more greatness today. >> Yeah, absolutely. You know, I sometimes think we've hit an energy cliff, but it feels like the energy is just >> Continuous. Well, I think we're gonna slide right into tomorrow. >> Yeah, me too. I love it. And we've got two fantastic analysts with us today, Sarbjeet and Keith. Thank you both for joining us. We feel so lucky today. >> Great being back on. >> Thanks for having us. >> Yeah, it's nice to have you back on the show. We had you yesterday, but I miss hosting with you. It's been a while. >> It has been a while. We haven't done anything since, since pre >> Pandemic, right? >> Yeah, I think you're right. >> Four times there >> Be four times back in the day. >> I always enjoy hosting with Lisa, cuz she's so well prepared. I don't have to do any research when I come >> Home. >> Lisa will bring up some, Oh, sorry. Keith, I see that in 2008 you won this award for >> Yeah. >> Being just excellent, and I'm like, oh >> Yeah. All right, Keith. So, >> So did you do his analysis? >> Yeah, it's all done. Yeah. Great. >> The only part is he's not sitting next to me, we can't see it, so it's gonna be like a magic crystal ball. Right. So, a lot of people here. You got some stats in terms of the attendees compared >> To last year? Yeah, Priyanka told us we were double last year, up to 8,000. We also got the scoop earlier that 2023 is gonna be in Chicago, which is very exciting. >> Oh, that is nice. Yeah, >> We got to break that here. >> Excellent. Keith, talk to us about what some of the things are that you've seen the last couple of days. The momentum. What's the vibe? I saw your tweet about the top three things you were being asked. Kubernetes was not one of them. >> Kubernetes was not one of 'em. This conference is starting to, it still feels very different than a vendor conference. The keynote is kind of, you know, all over the place, talking about projects, but the hallway track has been, you know, this is maybe my fifth or sixth KubeCon in person. 
And the hallway track is different. It's less about projects and more about how do we adjust to the enterprise? How do we actually do enterprise things? And it has been amazing watching this community grow, I'm gonna say grow up and mature. You know, they're not wearing ties yet, but they are definitely understanding kind of the friction of implementing new technology in an enterprise. >> Yeah. So Sarbjeet, what's been your take? We were with you yesterday. What's been the take today, the takeaways? >> Not much has changed since yesterday, but a few things I think I missed talking about yesterday. First of all, let's just talk about Amazon. Amazon earnings came out, it spooked the market, and I think it's relevant in this context as well, because they're the number one cloud provider. And almost all of these technologies on the back of us here, they are related to cloud, right? So it will have some impact on these. Like, we have to analyze that. Will it make the open source go faster or slower, in lieu of the fact that the cloud growth is slowing? Right? So that's one thing, but put that aside. I've been thinking about the future of Kubernetes. What is the future of Kubernetes? And in that context, I was thinking, like, I think in tangents, like, what else is around this thing? So I think CNCF has been riding the success of Kubernetes. That was their number one flagship project, if you will. And it was mature enough to stand on its own. It was Google's, it's Google's Borg. Kubernetes is a genericized version of that, right? So folks who do tech deep down, they know that. Right. So I think it's easier to stand with a solid, you know, project. But when the newer projects come in, then your mettle will get tested at CNCF. Right. >> And CNCF, I mean, they've got over 140 projects right now. So there's definitely much beyond >> Kubernetes. Yeah. So I have numbers there. 18 graduated, right, 37 in incubation and then 81 in sandbox stage. They have three stages, right? So they have a lot to chew on, and the more they take on, the less, you know, quality that goes into it. Who's putting the money behind it? Which vendors are sponsoring CNCF, like how they're getting funded up. I think it's >> Something I pay attention to as well. Yeah. Lisa, I know you've got >> Some insight. Those are the things I was thinking about today. >> I gotta ask you, what's your take on what Keith said? Are you also seeing the maturation of the enterprise here at KubeCon? >> Yes, I am, actually. When you say enterprise, versus what's the other side? Startups, right? So startups start using open source a lot earlier, or a lot more than enterprises. For the enterprise, the number one thing they need, for their production workloads, is a vendor supporting them. I said that yesterday as well, right? So it depends on the size of the enterprise. If you're a big shop, definitely, if you're one of the Fortune 500s and you're a tech savvy shop, then you can absorb the open source directly, coming from the open source sort of universe right to you. But if you are the second tier of enterprise, you want to go to a provider, which is a managed service provider, or it can be a cloud service provider in this case. Most of the cloud service providers have multiple versions of Kubernetes, for example. 
Most of the cloud service providers have multiple versions of Kubernetes, for example. >>I'm not talking about Kubernetes only, but like, but that is one example, right? So at Amazon you can get five different flavors of Kubernetes, right? Fully manage, have, manage all kind of stuff. So people don't have bandwidth to manage that stuff locally. You have to patch it, you have to roll in the new, you know, updates and all that stuff. Like, it's a lot of work for many. So CNCF actually is formed for that reason. Like the, the charter is to bring the quality to open source. Like in other companies they have the release process and they, the stringent guidelines and QA and all that stuff. So is is something ready for production? That's the question when it comes to any software, right? So they do that kind of work and, and, and they have these buckets defined at high level, but it needs more >>Work. Yeah. So one of the things that, you know, kind of stood out to me, I have good friend in the community, Alex Ellis, who does open Fast. It's a serverless platform, great platform. Two years ago or in 2019, there was a serverless day date. And in serverless day you had K Native, you had Open Pass, you had Ws, which is supported by IBM completely, not CNCF platforms. K native came into the CNCF full when Google donated the project a few months ago or a couple of years ago, now all of a sudden there's a K native day. Yes. Not a serverless day, it's a K native day. And I asked the, the CNCF event folks like, what happened to Serverless Day? I missed having open at serverless day. And you know, they, they came out and said, you know what, K native got big enough. >>They came in and I think Red Hat and Google wanted to sponsor a K native day. So serverless day went away. So I think what what I'm interested in and over the next couple of years is, is they're gonna be pushback from the C against the cncf. Is the CNCF now too big? Is it now the gatekeeper for do I have to be one of those 147 projects, right? In order enough to get my project noticed the open, fast, great project. I don't think Al Alex has any desire to have his project hosted by cncf, but it probably deserves, you know, shoulder left recognition with that. So I'm pushing to happen to say, okay, if this is open community, this is open source. If CNC is the place to have the cloud native conversation, what about the projects that's not cncf? Like how do we have that conversation when we don't have the power of a Google right. Or a, or a Lenox, et cetera, or a Lenox Foundation. So GE what, >>What are your thoughts on that? Is, is CNC too big? >>I don't think it's too big. I think it's too small to handle the, what we are doing in open source, right? So it's a bottle. It can become a bottleneck. Okay. I think too big in a way that yeah, it has, it has, it has power from that point of view. It has that cloud, if you will. The people listen to it. If it's CNCF project or this must be good, it's like in, in incubators. Like if you are y white Combinator, you know, company, it must be good. You know, I mean, may not be >>True, but, >>Oh, I think there's a bold assumption there though. I mean, I think everyone's just trying to do the best they can. And when we're evaluating projects, a very different origin and background, it's incredibly hard. Very c and staff is a staff of 30 people. They've got 180,000 people that are contributing to these projects and a thousand maintainers that they're trying to uphold. 
I think the challenge is actually really great. And to me, I actually look at events as an illustration of, you know, what's the culture and the health of an organization. If I were to evaluate CNCF based on that, I'd say we're very healthy right now. I would say that we're in a good spot. There's a lot of momentum. >> Yeah, I think CNCF is very healthy. I'm appreciative for it being here. I love KubeCon. It's becoming the de facto conference to have this conversation. It has >> A totally >> Different vibe to others. It's a totally different vibe. Yeah. There needs to be a conduit, and truth be told, enterprise buyers, to Sarbjeet's point, and this is something that we do absolutely agree on, enterprise buyers, we want someone to pick winners and losers. We do. We don't want a box of Lego dumped in the middle of our table. We want somebody to have sorted that out. So while there may be five or six different service mesh solutions, at least with the CNCF, I can go there and say, oh, I'll pick between the three or four that are most popular. And it's a place to curate. But I think with that curation comes the other side of it. Of how do we, you know, without the big corporate sponsor, how do I get my project pushed up? Right? Elevated. Elevated, yep. And put onto the show floor. You know, another way that projects get noticed is that startups will adopt them, push them. They may not even be, my product may not even be based on the CNCF project. But The New Stack has a booth, Ford has a booth, nothing to do with an individual product, but promoting open source. What happens when you're not sponsored? >> I gotta ask you guys, what do you disagree on? >> Oh, so what do we disagree on? So I'm of the mindset, I can say this, I believe hybrid infrastructure is the future of IT. Bar none. If I built my infrastructure, if I built my application in the cloud 10 years ago and I'm still building net new applications, I have stuff that I built 10 years ago that looks a lot like on-prem. What do I do with it? I can't modernize it cuz I don't have the developers to do it. I need to stick that somewhere. And where I'm going to stick that is probably a hybrid infrastructure. So colo. I'm not gonna go back to the data center, but I'm gonna pick up something that looks very much like the data center, and I'm saying embrace that, it's the future. And if you're Boeing, and Boeing is a member of CNCF, that's a whole nother topic, if you have AS/400s, HP-UX, et cetera, stick that stuff in colo, build new stuff, and continue to support OpenStack, et cetera, et cetera. Because that's the future. Hybrid is the future. >> And Sarbjeet, agree, disagree? >> Okay, hybrid. Nobody can deny that hybrid is the reality, not the future. It's a reality right now. It's a necessity right now, you can't do without it. Right. And okay, hybrid is a very relative term. You can be like 10% here, 90% there, still hybrid, right? So the data center is shrinking and it will keep shrinking. Right? And >> So by how much is the data center shrinking? >> This is where >> Quick, one quick question for you guys. How much is it growing by, at a clip? >> Yeah, but there's no data supporting that. David Lym just came out with a report, I think last year, that showed that the data center is holding steady, holding steady, not growing, but not shrinking. >> Who sponsored that study? Wait, hold on. So that's a question, right? So more than 1 million data centers have been closed. 
I can dig up that number, somebody, some organization published that, but maybe they're cloud, you know, people only. So when you get these kinds of statements, they can be very skewed statements, right? But if you have seen the scene out there, which you have, I know, but I have also seen a lot of data centers, walked the floor of, you know, a hundred thousand servers in a data center. I cannot imagine us consuming infrastructure the way we were as we go into the future. Okay, with one caveat actually. I am not a big fan of, like, broad strokes, like making a blanket statement, oh no, the data center's dead. Or if you are, >> That's how you get those catchy headlines now. Yeah, I know. >> I'm all about to >> Put a stake in the ground. >> Actually, I think that you get more intelligence from the nuance, right? The small little details, if you will. If you're Goldman Sachs or Bank of America, you have so many data centers, and you will still have data centers, because performance matters to you, right? Your latency matters for applications. But if you are even a Fortune 500 company on the lower end, or in a healthcare vertical, right, then your situation is different. If you are a high growth startup, your situation is different, right? You will be a hundred percent cloud. So cloud gives you velocity, the pace of change, the pace of experimentation. You are actually buying innovation through cloud. It's a proxy for innovation. And that's how I see it. But if you're stuck with older applications, I totally understand. >> Yeah. So there, >> We need that on-prem. Yeah, >> Well, I think, to bring it full circle, what we agree on is that cloud is the place where innovation happens. Okay? At some point innovation becomes legacy debt and thus you have hybrid. You are not going to keep your old applications up to date forever. The math just doesn't add up. And where I differ in opinion is that not everyone needs innovation to keep moving. They need innovation for a period of time and then they need steady state. So Sarbjeet, we >> Argue about this. I have a, I >> Love this debate though. I say efficiency and stability also play an important role. I see exactly what you're talking about. No, it's >> Great. I have a counter to that. Let me tell you >> Why. Let's >> Hear it. Because if you look at storage only, right? Just storage. Just take storage, compute, network for a minute, the three cost areas in infrastructure, right? So with storage, early on there was one tier of storage, you pay the same price. Now there are like five storage tiers, right? What I'm trying to say is the market sets the price. The market will tell you where this whole thing will go. But I know their margins are high in cloud, 20 plus percent, and margins will shrink as we go forward. That means the cloud will become cheaper relative to on-prem. In some cases it's already cheaper. But even if it's a stable workload, even in that case, we will have a lower tier of service. I mean, you can't argue with me that the cloud versus your data center are on the same tier of service. Like, cloud is a better, you know, product than your data center. Hands down. >> I love it. We are gonna relish in the debates between the two of you. Mic drops. The energy is great. I love it. Perspective. 
It's not like any of us can quite see through the crystal ball, but we have very informed opinions, which is super exciting. Yeah. Lisa, any last thoughts today? >> Just love, I love the debate as well. That's part of what being in this community is all about. Sharing opinions, expressing opinions. That's how it grows. That's how we innovate. Yeah. Obviously we need the cloud, but that's how we innovate. That's how we grow. Yeah. And we've seen that demonstrated the last couple days, and your takes here on theCUBE and on Twitter. Brilliant. >> Thank you. I absolutely love it. I'm gonna close this out with a really important analysis on the swag of the show. Yes. And if you know, yesterday we were looking at what is the weirdest or most unique swag. We had that bucket hat that took the grand prize. Today we're gonna focus on something that's actually quite cool. A lot of the vendors here have really dedicated their swag to being local to Detroit, very specific in their sourcing. Sonatype here has these beautiful flannels. You can't quite feel this flannel, but it's very legit, hand-sewn here in Michigan. I can't say that I've been to too many conferences, if any, where there was this kind of commitment to localizing and sourcing swag from around the corner. We also see this with the Intel booth. They've got screen printers out here doing custom hoodies on the spot. >> Oh, fun. They're even, like, appropriately sized. They had local artists do these designs, and if you're like me and you care about what's on your wrist, you're familiar with Shinola. This is one of my favorite swags that's available. There is a contest going on here. Yeah, so if you are a fan, make sure that you go and check this out. I talked about this on the show, we've had the founder, the CEO, on the show, and yeah, I mean, Shinola is just full of class. And since we are in Detroit as well, one of the fun themes is cars. >> Yes. >> And StormForge, who are also on the show, is actually giving away an Aston Martin, which is very exciting. Not exactly manufactured in Detroit. However, still very cool on the car front, and 
At least you're gonna get a taste of the swag, a taste of the stories, and some smiles here from those of us on theCUBE. Thank you both so much for being here with us. Lisa, thanks for another fabulous day. Got it, girl. My name's Savannah Peterson. Thank you for joining us from Detroit. We're theCUBE and we can't wait to see you tomorrow.

Published Date : Oct 28 2022


KubeCon Keynote Analysis | KubeCon + CloudNativeCon NA 2022


 

(upbeat techno music) >> Hello, everyone. Welcome to theCUBE here live in Detroit for KubeCon + CloudNativeCon 2022. I'm John Furrier, host of theCUBE. This is our seventh consecutive KubeCon + CloudNativeCon. Since inception, theCube's been there every year. And of course, theCUBE continues to grow. So does the community as well as our host roster. I'm here with my co-host, Lisa Martin. Lisa, great to see you. And our new theCube host, Savannah Peterson. Savannah, welcome to theCUBE. >> Thanks, John. >> Welcome. >> Welcome to the team. >> Thanks, team. It's so wonderful to be here. I met you all last KubeCon and to be sitting on this stage in your company is honestly an honor. >> Well, great to have you. Lisa and I have done a lot of shows together and it's great to have more cadence around. You know, more fluid around the content, and also the people. And I would like you to take a minute to tell people your background. You know the community here. What's the roots? You know the Cloud Native world pretty well. >> I know it as well as someone my age can. As we know, the tools and the tech is always changing. So hello, everyone. I'm Savannah Peterson. You can find me on the internet @SavIsSavvy. Would love to hear from you during the show. Big fan of this space and very passionate about DevOps. I've been working in the Silicon Valley and the Silicon Alley for a long time, helping companies scale internationally as a community builder as well as a international public speaker. And honestly, this is just such a fun evolution for my career and I'm grateful to be here with you both. >> We're looking forward to having you on theCUBE. Appreciate it. Lisa? >> Yes. >> KubeCon. Amazing again this year. Just keeps growing bigger and bigger. >> Yes. >> Keynote review, you were in there. >> Yup. >> I had a chance to peek in a little bit, but you were there and got most of the news. What was the action? >> You know, the action was really a big focus around the maintainers, what they're doing, giving them the props and the kudos and the support that they deserve. Not just physically, but mentally as well. That was a really big focus. It was also a big focus on mentoring and really encouraging more people- >> Love that. >> I did, too. I thought that was fantastic to get involved to help others. And then they showed some folks that had great experiences, really kind of growing up within the community. Probably half of the keynote focus this morning was on that. And then looking at some of the other projects that have graduated from CNCF, some of these successful projects, what they're doing, what folks are doing. Cruise, one of the ones that was featured. You've probably seen their driverless cars around San Francisco. So it was great to see that, the successes that they've had and where that's going. >> Yeah. Lisa, we've done how many shows? Hundreds of shows together. When you see a show like this grow and continue to mature, what's your observation? You've seen many shows we've hosted together. What jumps out this year? Is it just that level of maturization? What's your take on this? >> The maturization of the community and the collaboration of the community. I think those two things jumped out at me even more than last year. Last year, obviously a little bit smaller event in North America. It was Los Angeles. This year you got a much stronger sense of the community, the support that they have for each other. 
There were a lot of standing ovations particularly when the community came out and talked about what they were doing in Ukraine to support fellow community members in Ukraine and also to support other Ukrainians in terms of getting into tech. Lot of standing ovations. Lot of- >> Savannah: Love that, yeah. >> Real authenticity around the community. >> Yeah, Savannah, we talked on our intro prior to the event about how inclusive this community is. They are really all in on inclusivity. And the Ukraine highlight, this community is together and they're open. They're open to everybody. >> Absolutely. >> And they're also focused on growing the educational knowledge. >> Yeah, I think there's a real celebration of curiosity within this community that we don't find in certain other sectors. And we saw it at dinner last night. I mean, I was struck just like you Lisa walking in today. The energy in that room is palpably different from last year. I saw on Twitter this morning, people are very excited. Many people, their first KubeCon. And I'm sure we're going to be feeding off of that, that kind of energy and that... Just a general enthusiasm and excitement to be here in Detroit all week. It's a treat. >> Yeah, I even saw Stu Miniman earlier, former theCUBE host. He's at Red Hat. We were talking on the way in and he made an observation I thought was interesting I'll bring up because this show, it's a lot "What is this show? What isn't this show?" And I think this show is about developers. What it isn't is not a business show. It's not about business. It's not about industry kind of posturing or marketing. All the heavy hitters on the dev side are here and you don't see the big execs. I mean, you got the CEOs of startups here but not the CEOs of the big public companies. We see the doers. So, I mean, I think my take is this show's about creating products for builders and creating products that people can consume. And I think that is the Cloud Native lanes that are starting to form. You're either creating something for builders to build stuff with or you're creating stuff that could be consumed. And that seems to be for applications. So the whole app side and services seem to be huge. >> They also did a great job this morning of showcasing some of the big companies that we all know and love. Spotify. Obviously, I don't think a day goes by where I don't turn on Spotify. And what it's done- >> Me neither. >> What it's done for the community... Same with Intuit, I'm a user of both. Intuit was given an End User Award this morning during the keynote for their contributions, what they're doing. But it was nice to see some just everyday companies, Cloud Native companies that we all know and love, and to understand their contributions to the community and how those contributions are affecting all of us as end users. >> Yeah, and I think those companies like Intuit... Argo's been popular, Arlo now new, seeing those services, and even enterprises are contributing. You know, Lyft is always here, popular with Envoy. The community isn't just vendors and that's the interesting thing. >> I think that's why it works. To me, this event is really about the celebration of developer relations. I mean, every DevRel from every single one of these companies is here. Like you said, in lieu of the executive, that's essentially who we're attracting. And if you look out over the show floor here, I mean, we've probably got, I don't know, three to four extra vendors than we had last year. It totally is a different tone.
This community doesn't like to be sold to. This community likes to be collaborative. They like to learn and they like to help. And I think we see that within the ecosystem inside the room today. >> It's not a top down sales pitch. It's really consensus. >> No. >> Do it out in the open transparency. Don't sell me stuff. And I think the other thing I like about this community is that we're starting to see that... And then we've said this in theCUBE before. We'll say it again. Maybe be more controversial. Digital transformation is about the developer, right? And I think the power is going to shift in every company to the developer because if you take digital transformation to completion, everything happens the way it's happening, the company is the application. It's not IT who serves the organization- >> I love thinking about it like that. That's a great point, John. >> The old phase was IT was a department that served the business. Well, the business is IT now. So that means developer community is going to grow like crazy and they're going to be in the front lines driving all the change. In my opinion, you're going to see this developer community grow like crazy and then the business side of industry will match up with that. I think that's what's going to happen. >> So, the developers are becoming the influencers? >> Developers are the power source for all companies. They're in charge. They're going to dictate terms to how businesses will run because that's going to be natural 'cause digital transformation's about the app and the business is the app. So that means it has to be coded. So I think you're going to see a lot of innovation around app server-like experiences where the apps are just being developed faster than the infrastructure's enabling that, completely invisible. And I think you're going to see this kind of architecture-less, I'll put it out there that term architecture-less, environment where you don't need an architecture. It's just you code away. >> Yeah, yeah. We saw GitHub mentioned in the keynote this morning. And I mean, low code, no code. I think your finger's right on the pulse there. >> Yeah. What did you guys see? Anything else you see? >> I think just the overall... To your point, Savannah, the energy. Definitely higher than last year. When I saw those standing ovations, people really came together around the sense of community and what they've accomplished especially in the last two plus years of being remote. They did a great job of involving a lot of folks, some of whom are going to be on the program with us this week that did remote parts of the keynote. One of our guests on today from Vitess was talking about the successes and the graduation of their program so that the sense of community, but also not just the sense of it, the actual demonstration of it was also quite palpable this morning, and I think that's something that I'm excited for us to hear about with our guests on the program this week. >> Yeah, and I think the big story coming out so far as the show starts is the developers are in charge. They're going to set the pace for all the ops, data ops, security ops, all operations. And then the co-located events that were held Monday and Tuesday prior to kickoff today. You saw WebAssembly's come out of the woodwork as it got a lot of attention. Two startups got funded heavily on Series A. You're starting to see that project really work well. That's going to be an addition to the container market. So, interesting to see how Docker reacts to that.
Red Hat's doing great. ServiceMeshCon was phenomenal. I saw Solo.io got massive traction with those guys. So like Service Mesh, WebAssembly, you can start to see the dots connecting. You're starting to see this layer below Kubernetes and then a layer above Kubernetes developing. So I think it's going to be great for applications and great for the infrastructure. I think we'll see how it comes out and all these companies we have on here are all about faster, more integrated, some very, very interesting to see. So far, so good. >> You guys talked about in your highlight session last week or so. Excited to hear about the end users, the customer stories. That's what I'm interested in understanding as well. It's why it resonates with me when I see brands that I recognize. Well, I use it every day. How are they using containers and Kubernetes? How are they actually not just using it to deploy their app, their technologies, that we all expect are going to be up 24/7, but how are they also contributing to the development of it? So I'm really excited to hear those end users. >> We're going to have Lockheed Martin. And we wrote a story on SiliconANGLE, the Red Hat, Lockheed Martin, real innovation on the edge. You're starting to see educate with the edge. It's really the industrial edge coming to be big. It'd be very interesting to see. >> Absolutely, we got Ford Motor Company coming on as well. I always loved stories, Savannah, that are history of companies. Ford's been around since 1903. How is a company that- >> Well, we're in the home of Ford- as well here. >> We are. How they evolved digitally? What are they doing to enable the developers to be those influencers that John says? It's going to be them. >> They're a great example of a company that's always been on the forefront, too. I mean, they had a head of VR 25 years ago when most people didn't even know what VR was going to stand for. So, I can't wait for that one. You tease the Docker interview coming up very well, John. I'm excited for that one. One last thing I want to bring up that I think is really refreshing and it's reflected right here on this stage is you talked about the inclusion. I think there's a real commitment to diversity here. You can see the diversity stats on CNCF's website. It's right there on KubeCon. At the bottom, there's a link in every email I've gotten highlighting that. We've got two women on this stage all week which is very exciting. And the opening keynote was a woman. So quite frankly, I am happy as a female in this industry to see a bit more representation. And I do appreciate just on the note of being inclusive, it's not just about gender or age, it's also about the way that CNCF thinks about your experience since we're in this kind of pandemic transitional period. They've got little pins. Last year, we had bracelets depending on your level of comfort. Equivocally like a stoplight which is... I just think it's really nice and sensitive and that attention to detail makes people feel comfortable. Which is why we have the community energy that we have. >> Yeah, and being 12 years in the business... With theCUBE, we've been 12 years in the business, seven years with KubeCon and Cloud Native, I really appreciate the Linux Foundation including me as I get older. (Lisa and Savannah laugh) >> Savannah: That's a good point. >> Ageism were, "Hey!" Thank you. >> There was a lot of representation. You talked about females and so often we go to shows and there's very few females. Some companies are excellent at it.
But from an optics perspective, to me it stands out. There was great representation across. There were disabled people on stage, people of color, women, men of all ages. It was very well-orchestrated. >> On the demographic- >> And sincere. >> Yeah, yeah. >> And the demographics, too. On the age side, it's lower too. You're starting to see younger... I mean, high school, college representation. I saw a lot of college students last night. I saw on the agenda sessions targeting universities. I mean, I'm telling you this is reaching down. Open source now is so great. It's growing so fast. It's continuing to thunder away. And with success, it's just getting better and better. In fact, we were talking last night about at some point we might not have to write code. Just glue it together. And that's why I think the supply chain and security thing is an issue. But this is why it's so great. Anyone can code and I think there's a lot of learning to have. So, I think we'll continue to do our job to extract the signal from the noise. So, thanks for the kickoff. Good commentary. Thank you. All right. >> Of course. >> Let's get started. Day one of three days of live coverage here at KubeCon + CloudNativeCon. I'm John Furrier with Lisa Martin, and Savannah Peterson. Be back with more coverage starting right now. (gentle upbeat music)

Published Date : Oct 27 2022

SUMMARY :

John Furrier, Lisa Martin, and new host Savannah Peterson open theCUBE's coverage of KubeCon + CloudNativeCon NA 2022 in Detroit. They recap a keynote centered on supporting maintainers, mentoring, graduated CNCF projects, and end users such as Cruise, Spotify, and Intuit, and note the community's inclusivity, from the Ukraine tribute and standing ovations to the diversity visible on stage. The hosts argue that developers, not executives, are now driving digital transformation, point to WebAssembly and service mesh momentum from the co-located events, and preview end-user conversations with Lockheed Martin and Ford during the week's coverage.

SENTIMENT ANALYSIS :

ENTITIES

Entity                    Category        Confidence
Savannah                  PERSON          0.99+
Lisa Martin               PERSON          0.99+
Savannah Peterson         PERSON          0.99+
John                      PERSON          0.99+
Lisa Martin               PERSON          0.99+
Lisa                      PERSON          0.99+
San Francisco             LOCATION        0.99+
Ukraine                   LOCATION        0.99+
Detroit                   LOCATION        0.99+
Ford                      ORGANIZATION    0.99+
Los Angeles               LOCATION        0.99+
John Furrier              PERSON          0.99+
North America             LOCATION        0.99+
12 years                  QUANTITY        0.99+
Ford Motor Company        ORGANIZATION    0.99+
Last year                 DATE            0.99+
12 years                  QUANTITY        0.99+
seven years               QUANTITY        0.99+
last year                 DATE            0.99+
Red Hat                   ORGANIZATION    0.99+
Lockheed Martin           ORGANIZATION    0.99+
Silicon Valley            LOCATION        0.99+
Monday                    DATE            0.99+
KubeCon                   EVENT           0.99+
CNCF                      ORGANIZATION    0.99+
Tuesday                   DATE            0.99+
GitHub                    ORGANIZATION    0.99+
Linux Foundation          ORGANIZATION    0.99+
Lyft                      ORGANIZATION    0.99+
One                       QUANTITY        0.99+
today                     DATE            0.99+
first                     QUANTITY        0.99+
two things                QUANTITY        0.99+
last night                DATE            0.99+
three                     QUANTITY        0.99+
last week                 DATE            0.99+
Hundreds of shows         QUANTITY        0.99+
CloudNativeCon            EVENT           0.99+
three days                QUANTITY        0.99+
1903                      DATE            0.99+
Arlo                      ORGANIZATION    0.99+
both                      QUANTITY        0.98+
this week                 DATE            0.98+
This year                 DATE            0.98+
two women                 QUANTITY        0.98+
Spotify                   ORGANIZATION    0.98+
Argo                      ORGANIZATION    0.98+
Silicon Alley             LOCATION        0.98+
Stu Miniman               PERSON          0.98+
@SavIsSavvy               PERSON          0.97+
Kubernetes                TITLE           0.96+
Solo.iOS                  TITLE           0.96+
this year                 DATE            0.96+
this morning              DATE            0.96+
25 years ago              DATE            0.95+
one                       QUANTITY        0.95+

David Flynn Supercloud Audio


 

>> From every ISV to solve the problems. You want there to be tools in place that you can use, either open source tools or whatever it is that help you build it. And slowly over time, that building will become easier and easier. So my question to you was, where do you see you playing? Do you see yourself playing to ISVs as a set of tools, which will make their life a lot easier and provide that work? >> Absolutely. >> If they don't have, so they don't have to do it. Or you're providing this for the end users? Or both? >> So it's a progression. If you go to the ISVs first, you're doomed to starve before you have time for that other option. >> Yeah. >> Right? So it's a question of phase, the phasing of it. And also if you go directly to end users, you can demonstrate the power of it and get the attention of the ISVs. I believe that the ISVs, especially those with the biggest footprints and the most, you know, coveted estates, they have already made massive investments at trying to solve decentralization of their software stack. And I believe that they have used it as a hook to try to move to a software as a service model and rope people into leasing their infrastructure. So if you look at the clouds that have been propped up by Autodesk or by Adobe, or you name the company, they are building proprietary makeshift solutions for decentralizing or hybrid clouding. Or maybe they're not even doing that at all and all they're saying is hey, if you want to get location agnosticness, then what you should just, is just move into our cloud. >> Right. >> And then they try to solve in the background how to decentralize it between different regions so they can have decent offerings in each region. But those who are more advanced have already made larger investments and will be more averse to, you know, throwing that stuff away, all of their makeshift machinery away, and using a platform that gives them high performance parallel, low level file system access, while at the same time having metadata-driven, you know, policy-based, intent-based orchestration to manage the diffusion of data across a decentralized infrastructure. They are not going to be as open because they've made such an investment and they're going to look at how do they monetize it. So what we have found with like the movie studios who are using us already, many of the apps they're using, many of those software offerings, the ISVs have their own cloud that offers that software for the cloud. But what we got when I asked about this, 'cause I delved specifically into this question because I'm very interested to know how we're going to make that leap from end user upstream into the ISVs where I believe we need to, and they said, look, we cannot use these software ISV-specific SAS clouds for two reasons. Number one is we lose control of the data. We're giving it to them. That's security and other issues. And here you're talking about we're doing work for Disney, we're doing work for Netflix, and they're not going to let us put our data on those software clouds, on those SAS clouds. Secondly, in any reasonable pipeline, the data is shared by many different applications. We need to be agnostic as to the application. 'Cause the inputs to one application, you know, the output for one application provides the input to the next, and it's not necessarily from the same vendor. So they need to have a data platform that lets them, you know, go from one software stack, and you know, to run it on another.
Because they might do the rendering with this and yet, they do the editing with that, and you know, et cetera, et cetera. So I think the further you go up the stack in the structured data and dedicated applications for specific functions in specific verticals, the further up the stack you go, the harder it is to justify a SAS offering where you're basically telling the end users you need to park all your data with us and then you can run your application in our cloud and get this. That ultimately is a dead end path versus having the data be open and available to many applications across this supercloud layer. >> Okay, so-- >> Is that making any sense? >> Yes, so if I could just ask a clarifying question. So, if I had to take Snowflake as an example, I think they're doing exactly what you're saying is a dead end, put everything into our proprietary system and then we'll figure out how to distribute it. >> Yeah. >> And I think if you're familiar with Zhamak Dehghani's data mesh concept. Are you? >> A little bit, yeah. >> But in her model, Snowflake, a Snowflake warehouse is just a node on the mesh and that mesh is-- >> That's right. >> Ultimately the supercloud and you're an enabler of that is what I'm hearing. >> That's right. What they're doing up at the structured level and what they're talking about at the structured level we're doing at the underlying, unstructured level, which by the way has implications for how you implement those distributed database things. In other words, implementing a Snowflake on top of Hammerspace would have made building stuff like that in the first place easier. It would allow you to easily shift and run the database engine anywhere. You still have to solve how to shard and distribute at the transaction layer above, so I'm not saying we're a substitute for what you need to do at the app layer. By the way, there is another example of that and that's Microsoft Office, right? It's one thing to share that, to have a file share where you can share all the docs. It's something else to have Word and PowerPoint, Excel know how to allow people to be simultaneously editing the same doc. That's always going to happen in the app layer. But not all applications need that level of, you know, in-app decentralization. You know, many of them, many workflows are pipelined, especially the ones that are very data intensive where you're doing drug discovery or you're doing rendering, or you're doing machine learning training. These things are human in the loop with large stages of processing across tens of thousands of cores. And I think that kind of data processing pipeline is what we're focusing on first. Not so much the Microsoft Office or the Snowflake, you know, parking a relational database because that takes a lot of application layer stuff and that's what they're good at. >> Right. >> But I think... >> Go ahead, sorry. >> Later entrants in these markets will find Hammerspace as a way to accelerate their work so they can focus more narrowly on just the stuff that's app-specific, higher level sharing in the app. >> Yes, Snowflake founders-- >> I think it might be worth mentioning also, just keep this confidential guys, but one of our customers is Blue Origin. And one of the things that we have found is kind of the point of what you're talking about with our customers. They're needing to build this and since it's not commercially available or they don't know where to look for it to be commercially available, they're all building it themselves. So this layer is needed.
And Blue is just one of the examples of quite a few we're now talking to. And like manufacturing, HPC, research where they're out trying to solve this problem with their own scripting tools and things like that. And I just, I don't know if there's anything you want to add, David, but you know, but there's definitely a demand here and customers are trying to figure out how to solve it beyond what Hammerspace is doing. Like the need is so great that they're just putting developers on trying to do it themselves. >> Well, and you know, Snowflake founders, they didn't have a Hammerspace to lean on. But, one of the things that's interesting about supercloud is we feel as though industry clouds will emerge, that as part of company's digital transformations, they will, you know, every company's a software company, they'll begin to build their own clouds and they will be able to use a Hammerspace to do that. >> A super pass layer. >> Yes. It's really, I don't know if David's speaking, I don't want to speak over him, but we can't hear you. May be going through a bad... >> Well, a regional, regional talks that make that possible. And so they're doing these render farms and editing farms, and it's a cloud-specific to the types of workflows in the median entertainment world. Or clouds specifically to workflows in the chip design world or in the drug and bio and life sciences exploration world. There are large organizations that are kind of a blend of end users, like the Broad, which has their own kind of cloud where they're asking collaborators to come in and work with them. So it starts to even blur who's an end user versus an ISV. >> Yes. >> Right? When you start talking about the massive data is the main gravity is to having lots of people participate. >> Yep, and that's where the value is. And that's where the value is. And this is a megatrend that we see. And so it's really important for us to get to the point of what is and what is not a supercloud and, you know, that's where we're trying to evolve. >> Let's talk about this for a second 'cause I want to, I want to challenge you on something and it's something that I got challenged on and it has led me to thinking differently than I did at first, which Molly can attest to. Okay? So, we have been looking for a way to talk about the concept of cloud of utility computing, run anything anywhere that isn't addressed in today's realization of cloud. 'Cause today's cloud is not run anything anywhere, it's quite the opposite. You park your data in AWS and that's where you run stuff. And you pretty much have to. Same with with Azure. They're using data gravity to keep you captive there, just like the old infrastructure guys did. But now it's even worse because it's coupled back with the software to some degree, as well. And you have to use their storage, networking, and compute. It's not, I mean it fell back to the mainframe era. Anyhow, so I love the concept of supercloud. By the way, I was going to suggest that a better term might be hyper cloud since hyper speaks to the multidimensionality of it and the ability to be in a, you know, be in a different dimension, a different plane of existence kind of thing like hyperspace. But super and hyper are somewhat synonyms. I mean, you have hyper cars and you have super cars and blah, blah, blah. I happen to like hyper maybe also because it ties into the whole Hammerspace notion of a hyper-dimensional, you know, reality, having your data centers connected by a wormhole that is Hammerspace. 
But regardless, what I got challenged on is calling it something different at all versus simply saying, this is what cloud has always meant to be. This is the true cloud, this is real cloud, this is cloud. And I think back to what happened, you'll remember, at Fusion IO we talked about IO memory and we did that because people had a conceptualization of what an SSD was. And an SSD back then was low capacity, low endurance, made to go military, aerospace where things needed to be rugged but was completely useless in the data center. And we needed people to imagine this thing as being able to displace entire SANs, with the kind of capacity density, performance density, endurance. And so we talked IO memory, we could have said enterprise SSD, and that's what the industry now refers to for that concept. What will people be saying five and 10 years from now? Will they simply say, well this is cloud as it was always meant to be where you are truly able to run anything anywhere and have not only the same APIs, but your same data available with high performance access, all forms of access, block, file and object everywhere. So yeah. And I wonder, and this is just me throwing it out there, I wonder if, well, there's trade offs, right? Giving it a new moniker, supercloud, versus simply talking about how cloud is always intended to be and what it was meant to be, you know, the real cloud or true cloud, there are trade-offs. By putting a name on it and branding it, that lets people talk about it and understand they're talking about something different. But also, is that an affront to people who thought that's what they already had? >> What's different, what's new? Yes, and so we've given a lot of thought to this. >> Right, it's like you. >> And it's because we've been asked that why does the industry need a new term, and we've tried to address some of that. But some of the inside baseball that we haven't shared is, you remember the Web 2.0, back then? >> Yep. >> Web 2.0 was the same thing. And I remember Tim Berners-Lee saying, "Why do we need Web 2.0? This is what the Web was always supposed to be." But the truth is-- >> I know, that was another perfect-- >> But the truth is it wasn't, number one. Number two, everybody hated the Web 2.0 term. John Furrier was actually in the middle of it all. And then it created this groundswell. So one of the things we wrote about is that supercloud is an evocative term that catalyzes debate and conversation, which is what we like, of course. And maybe that's self-serving. But yeah, HyperCloud, Metacloud, super, meaning, it's funny because super came from Latin supra, above, it was never the superlative. But the superlative was a convenient byproduct that caused a lot of friction and flack, which again, in the media business is like a perfect storm brewing. >> The bad thing to have to, and I think you do need to shake people out of their, the complacency of the limitations that they're used to. And I'll tell you what, the fact that you even have the terms hybrid cloud, multi-cloud, private cloud, edge computing, those are all just referring to the different boundaries that isolate the silo that is the current limited cloud. >> Right. >> So if I heard correctly, what just, in terms of us defining what is and what isn't in supercloud, you would say traditional applications which have to run in a certain place, in a certain cloud can't run anywhere else, would be the stuff that you would not put in as being addressed by supercloud.
And over time, you would want to be able to run the data where you want to and in any of those concepts. >> Or even modern apps, right? Or even modern apps that are siloed in SAS within an individual cloud, right? >> So yeah, I guess it's twofold. Number one, if you're going at the high application layers, there's lots of ways that you can give the appearance of anything running anywhere. The ISV, the SAS vendor can engineer stuff to have the ability to serve with low enough latency to different geographies, right? So if you go too high up the stack, it kind of loses its meaning because there's lots of different ways to make due and give the appearance of omni-presence of the service. Okay? As you come down more towards the platform layer, it gets harder and harder to mask the fact that supercloud is something entirely different than just a good regionally-distributed SAS service. So I don't think you, I don't think you can distinguish supercloud if you go too high up the stack because it's just SAS, it's just a good SAS service where the SAS vendor has done the hard work to give you low latency access from different geographic regions. >> Yeah, so this is one of the hardest things, David. >> Common among them. >> Yeah, this is really an important point. This is one of the things I've had the most trouble with is why is this not just SAS? >> So you dilute your message when you go up to the SAS layer. If you were to focus most of this around the super pass layer, the how can you host applications and run them anywhere and not host this, not run a service, not have a service available everywhere. So how can you take any application, even applications that are written, you know, in a traditional legacy data center fashion and be able to run them anywhere and have them have their binaries and their datasets and the runtime environment and the infrastructure to start them and stop them? You know, the jobs, the, what the Kubernetes, the job scheduler? What we're really talking about here, what I think we're really talking about here is building the operating system for a decentralized cloud. What is the operating system, the operating environment for a decentralized cloud? Where you can, and that the main two functions of an operating system or an operating environment are the process scheduler, the thing that's scheduling what is running where and when and so forth, and the file system, right? The thing that's supplying a common view and access to data. So when we talk about this, I think that the strongest argument for supercloud is made when you go down to the platform layer and talk of it, talk about it as an operating environment on which you can run all forms of applications. >> Would you exclude--? >> Not a specific application that's been engineered as a SAS. (audio distortion) >> He'll come back. >> Are you there? >> Yeah, yeah, you just cut out for a minute. >> I lost your last statement when you broke up. >> We heard you, you said that not the specific application. So would you exclude Snowflake from supercloud? >> Frankly, I would. I would. Because, well, and this is kind of hard to do because Snowflake doesn't like to, Frank doesn't like to talk about Snowflake as a SAS service. It has a negative connotation. >> But it is. >> I know, we all know it is. We all know it is and because it is, yes, I would exclude them. >> I think I actually have him on camera. >> There's nothing in common. >> I think I have him on camera or maybe Benoit as saying, "Well, we are a SAS." 
I think it's Slootman. I think I said to Slootman, "I know you don't like to say you're a SAS." And I think he said, "Well, we are a SAS." >> Because again, if you go to the top of the application stack, there's any number of ways you can give it location agnostic function or you know, regional, local stuff. It's like let's solve the location problem by having me be your one location. How can it be decentralized if you're centralizing on (audio distortion)? >> Well, it's more decentralized than if it's all in one cloud. So let me actually, so the spectrum. So again, in the spirit of what is and what isn't, I think it's safe to say Hammerspace is supercloud. I think there's no debate there, right? Certainly among this crowd. And I think we can all agree that Dell, Dell Storage is not supercloud. Where it gets fuzzy is this Snowflake example or even, how about a, how about a Cohesity that instantiates its stack in different cloud regions in different clouds, and synchronizes, however magic sauce it does that. Is that a supercloud? I mean, so I'm cautious about having too strict of a definition 'cause then only-- >> Fair enough, fair enough. >> But I could use your help and thoughts on that. >> So I think we're talking about two different spectrums here. One is the spectrum of platform to application-specific. As you go up the application stack and it becomes this specific thing. Or you go up to the more and more structured where it's serving a specific application function where it's more of a SAS thing. I think it's harder to call a SAS service a supercloud. And I would argue that the reason there, and what you're lacking in the definition is to talk about it as general purpose. Okay? Now, that said, a data warehouse is general purpose at the structured data level. So you could make the argument for why Snowflake is a supercloud by saying that it is a general purpose platform for doing lots of different things. It's just one at a higher level up at the structured data level. So one spectrum is the high level going from platform to, you know, unstructured data to structured data to very application-specific, right? Like a specific, you know, CAD/CAM mechanical design cloud, like an Autodesk would want to give you their cloud for running, you know, and sharing CAD/CAM designs, doing your CAD/CAM anywhere stuff. Well, the other spectrum is how well does the purported supercloud technology actually live up to allowing you to run anything anywhere with not just the same APIs but with the local presence of data with the exact same runtime environment everywhere, and to be able to correctly manage how to get that runtime environment anywhere. So a Cohesity has some means of running things in different places and some means of coordinating what's where and of serving diff, you know, things in different places. I would argue that it is a very poor approximation of what Hammerspace does in providing the exact same file system with local high performance access everywhere with metadata ability to control where the data is actually instantiated so that you don't have to wait for it to get orchestrated. But even then when you do have to wait for it, it happens automatically and so it's still only a matter of, well, how quick is it? And on the other end of the spectrum is you could look at NetApp with Flexcache and say, "Is that supercloud?" And I would argue, well kind of because it allows you to run things in different places because it's a cache. 
But you know, it really isn't because it presumes some central silo from which you're caching stuff. So, you know, is it or isn't it? Well, it's on a spectrum of exactly how fully is it decoupling a runtime environment from specific locality? And I think a cache doesn't, it stretches a specific silo and makes it have some semblance of similar access in other places. But there's still a very big difference to the central silo, right? You can't turn off that central silo, for example. >> So it comes down to how specific you make the definition. And this is where it gets kind of really interesting. It's like cloud. Does IBM have a cloud? >> Exactly. >> I would say yes. Does it have the kind of quality that you would expect from a hyper-scale cloud? No. Or see if you could say the same thing about-- >> But that's a problem with choosing a name. That's the problem with choosing a name supercloud versus talking about the concept of cloud and how true up you are to that concept. >> For sure. >> Right? Because without getting a name, you don't have to draw, yeah. >> I'd like to explore one particular or bring them together. You made a very interesting observation that from an enterprise point of view, they want to safeguard their store, their data, and they want to make sure that they can have that data running in their own workflows, as well as, as other service providers providing services to them for that data. So, and in particular, if you go back to, you go back to Snowflake. If Snowflake could provide the ability for you to have your data where you wanted, you were in charge of that, would that make Snowflake a supercloud? >> I'll tell you, in my mind, they would be closer to my conceptualization of supercloud if you can instantiate Snowflake as software on your own infrastructure, and pump your own data to Snowflake that's instantiated on your own infrastructure. The fact that it has to be on their infrastructure or that it's on their, that it's on their account in the cloud, that you're giving them the data and they're, that fundamentally goes against it to me. If they, you know, they would be a pure, a pure play if they were a software defined thing where you could instantiate Snowflake machinery on the infrastructure of your choice and then put your data into that machinery and get all the benefits of Snowflake. >> So did you see--? >> In other words, if they were not a SAS service, but offered all of the similar benefits of being, you know, if it were a service that you could run on your own infrastructure. >> So did you see what they announced, that--? >> I hope that's making sense. >> It does, did you see what they announced at Dell? They basically announced the ability to take non-native Snowflake data, read it in from an object store on-prem, like a Dell object store. They do the same thing with Pure, read it in, running it in the cloud, and then push it back out. And I was saying to Dell, look, that's fine. Okay, that's interesting. You're taking a materialized view or an extended table, whatever you're doing, wouldn't it be more interesting if you could actually run the query locally with your compute? That would be an extension that would actually get my attention and extend that. >> That is what I'm talking about. That's what I'm talking about. And that's why I'm saying I think Hammerspace is more progressive on that front because with our technology, anybody who can instantiate a service, can make a service.
And so I, so MSPs can use Hammerspace as a way to build a super pass layer and host their clients on their infrastructure in a cloud-like fashion. And their clients can have their own private data centers and the MSP or the public clouds, and Hammerspace can be instantiated, get this, by different parties in these different pieces of infrastructure and yet linked together to make a common file system across all of it. >> But this is data mesh. If I were HPE and Dell it's exactly what I'd be doing. I'd be working with Hammerspace to create my own data. I'd work with Databricks, Snowflake, and any other-- >> Data mesh is a good way to put it. Data mesh is a good way to put it. And this is at the lowest level of, you know, the underlying file system that's mountable by the operating system, consumed as a real file system. You can't get lower level than that. That's why this is the foundation for all of the other apps and structured data systems because you need to have a data mesh that can at least mesh the binary blob. >> Okay. >> That holds the binaries and that holds the datasets that those applications are running. >> So David, in the third week of January, we're doing supercloud 2 and I'm trying to convince John Furrier to make it a data slash data mesh edition. I'm slowly getting him to the knothole. I would very much, I mean you're in the Bay Area, I'd very much like you to be one of the headlines. Zhamak Dehghani is going to speak, she's the creator of Data Mesh, >> Sure. >> I'd love to have you come into our studio as well, for the live session. If you can't make it, we can pre-record. But you're right there, so I'll get you the dates. >> We'd love to, yeah. No, you can count on it. No, definitely. And you know, we don't typically talk about what we do as Data Mesh. We've been, you know, using global data environment. But, you know, under the covers, that's what the thing is. And so yeah, I think we can frame the discussion like that to line up with other, you know, with the other discussions. >> Yeah, and Data Mesh, of course, is one of those evocative names, but she has come up with some very well defined principles around decentralized data, data as products, self-serve infrastructure, automated governance, and so forth, which I think your vision plugs right into. And she's brilliant. You'll love meeting her. >> Well, you know, and I think.. Oh, go ahead. Go ahead, Peter. >> Just like to work one other interface which I think is important. How do you see yourself and the open source? You talked about having an operating system. Obviously, Linux is the operating system at one level. How are you imagining that you would interface with that community as part of this development? >> Well, it's funny you ask 'cause my CTO is the kernel maintainer of the storage networking stack. So how the Linux operating system perceives and consumes networked data at the file system level, the network file system stack is his purview. He owns that, he wrote most of it over the last decade that he's been the maintainer, but he's the gatekeeper of what goes in. And we have leveraged his abilities to enhance Linux to be able to use this decentralized data, in particular with decoupling the control plane driven by metadata from the data access path and the many storage systems on which the data gets accessed. So this factoring, this splitting of control plane from data path, metadata from data, was absolutely necessary to create a data mesh like we're talking about.
And to be able to build this supercloud concept. And the highways on which the data runs and the client which knows how to talk to it is all open source. And we have, we've driven the NFS 4.2 spec. The newest NFS spec came from my team. And it was specifically the enhancements needed to be able to build a spanning file system, a data mesh at a file system level. Now that said, our file system itself and our server, our file server, our data orchestration, our data management stuff, that's all closed source, proprietary Hammerspace tech. But the highways on which the mesh connects are actually all open source and the client that knows how to consume it. So we would, honestly, I would welcome competitors using those same highways. They would be at a major disadvantage because we kind of built them, but it would still be very validating and I think only increase the potential adoption rate by more than whatever they might take of the market. So it'd actually be good to split the market with somebody else to come in and share those now super highways for how to mesh data at the file system level, you know, in here. So yeah, hopefully that answered your question. Does that answer the question about how we embrace the open source? >> Right, and there was one other, just that my last one is how do you enable something to run in every environment? And if we take the edge, for example, as being, as an environment which is much very, very compute heavy, but having a lot less capability, how do you do a hold? >> Perfect question. Perfect question. What we do today is a software appliance. We are using a Linux RHEL 8, RHEL 8 equivalent or a CentOS 8, or it's, you know, they're all roughly equivalent. But we have bundled and a software appliance which can be instantiated on bare metal hardware on any type of VM system from VMware to all of the different hypervisors in the Linux world, to even Nutanix and such. So it can run in any virtualized environment and it can run on any cloud instance, server instance in the cloud. And we have it packaged and deployable from the marketplaces within the different clouds. So you can literally spin it up at the click of an API in the cloud on instances in the cloud. So with all of these together, you can basically instantiate a Hammerspace set of machinery that can offer up this file system mesh. like we've been using the terminology we've been using now, anywhere. So it's like being able to take and spin up Snowflake and then just be able to install and run some VMs anywhere you want and boom, now you have a Snowflake service. And by the way, it is so complete that some of our customers, I would argue many aren't even using public clouds at all, they're using this just to run their own data centers in a cloud-like fashion, you know, where they have a data service that can span it all. >> Yeah and to Molly's first point, we would consider that, you know, cloud. Let me put you on the spot. If you had to describe conceptually without a chalkboard what an architectural diagram would look like for supercloud, what would you say? >> I would say it's to have the same runtime environment within every data center and defining that runtime environment as what it takes to schedule the execution of applications, so job scheduling, runtime stuff, and here we're talking Kubernetes, Slurm, other things that do job scheduling. We're talking about having a common way to, you know, instantiate compute resources. 
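To make the "spin it up at the click of an API" point above concrete, here is a minimal sketch of launching a marketplace-packaged software appliance on AWS with the boto3 SDK. The AMI ID, instance type, key pair, and tag value are hypothetical placeholders rather than actual Hammerspace artifacts, and other clouds or hypervisors would use their own equivalent APIs.

```python
# Minimal sketch: launching a marketplace-packaged software appliance
# "at the click of an API." The AMI ID, instance type, key pair, and tag
# below are hypothetical placeholders, not real product identifiers.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # placeholder marketplace AMI
    InstanceType="m5.2xlarge",             # placeholder instance size
    KeyName="my-keypair",                  # placeholder key pair
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "role", "Value": "data-orchestration-appliance"}],
    }],
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Appliance instance launched: {instance_id}")
```

The same pattern would repeat with each venue's own SDK or hypervisor API; the point of the appliance model is that the software, not the venue, stays constant.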
So a global compute environment, having a common compute environment where you can instantiate things that need computing. Okay? So that's the first part. And then the second is the data platform where you can have file, block and object volumes, and have them available with the same APIs in each of these distributed data centers and have the exact same data omnipresent with the ability to control where the data is from one moment to the next, local, where all the data is instantiated. So my definition would be a common runtime environment that's bifurcate-- >> Oh. (attendees chuckling) We just lost them at the money slide. >> That's part of the magic that makes people listen. We keep someone on pins and needles waiting. (attendees chuckling) >> That's good. >> Are you back, David? >> I'm on the edge of my seat. Common runtime environment. It was like... >> And just wait, there's more. >> But see, I'm maybe hyper-focused on the lower level of what it takes to host and run applications. And that's the stuff to schedule what resources they need to run and to get them going and to get them connected through to their persistence, you know, and their data. And to have that data available in all forms and have it be the same data everywhere. On top of that, you could then instantiate applications of different types, including relational databases, and data warehouses and such. And then you could say, now I've got, you know, now I've got these more application-level or structured data-level things. I tend to focus less on that structured data level and the application level and am more focused on what it takes to host any of them generically on that super pass layer. And I'll admit, I'm maybe hyper-focused on the pass layer and I think it's valid to include, you know, higher levels up the stack like the structured data level. But as soon as you go all the way up to like, you know, a very specific SAS service, I don't know that you would call that supercloud. >> Well, and that's the question, is there value? And Marianna Tessel from Intuit said, you know, we looked at it, we did it, and it just, it was actually negative value for us because connecting to all these separate clouds was a real pain in the neck. Didn't bring us any additional-- >> Well that's 'cause they don't have this pass layer underneath it so they can't even shop around, which actually makes it hard to stand up your own SAS service. And ultimately they end up having to build their own infrastructure. Like, you know, I think there's been examples like Netflix moving away from the cloud to their own infrastructure. Basically, if you're going to rent it for more than a few months, it makes sense to build it yourself, if it's at any kind of scale. >> Yeah, for certain components of that cloud. But if Goldman Sachs came to you, David, and said, "Hey, we want to collaborate and we want to build out a cloud and essentially build our SAS system and we want to do that with Hammerspace, and we want to tap the physical infrastructure of not only our data centers but all the clouds," then that essentially would be a SAS, would it not? And wouldn't that be a Super SAS or a supercloud? >> Well, you know, what they may be using to build their service is a supercloud, but their service at the end of the day is just a SAS service with global reach. Right? >> Yeah. >> You know, look at, oh shoot. What's the name of the company that does? It has a cloud for doing bookkeeping and accounting. I forget their name, net something. NetSuite. >> NetSuite.
NetSuite, yeah, Oracle. >> Yeah. >> Yep. >> Oracle acquired them, right? Is NetSuite a supercloud or is it just a SAS service? You know? I think under the covers you might ask are they using supercloud under the covers so that they can run their SAS service anywhere and be able to shop the venue, get elasticity, get all the benefits of cloud in the, to the benefit of their service that they're offering? But you know, folks who consume the service, they don't care because to them they're just connecting to some endpoint somewhere and they don't have to care. So the further up the stack you go, the more location-agnostic it is inherently anyway. >> And I think it's, paths is really the critical layer. We thought about IAS Plus and we thought about SAS Minus, you know, Heroku and hence, that's why we kind of got caught up and included it. But SAS, I admit, is the hardest one to crack. And so maybe we exclude that as a deployment model. >> That's right, and maybe coming down a level to saying but you can have a structured data supercloud, so you could still include, say, Snowflake. Because what Snowflake is doing is more general purpose. So it's about how general purpose it is. Is it hosting lots of other applications or is it the end application? Right? >> Yeah. >> So I would argue general purpose nature forces you to go further towards platform down-stack. And you really need that general purpose or else there is no real distinguishing. So if you want defensible turf to say supercloud is something different, I think it's important to not try to wrap your arms around SAS in the general sense. >> Yeah, and we've kind of not really gone, leaned hard into SAS, we've just included it as a deployment model, which, given the constraints that you just described for structured data would apply if it's general purpose. So David, super helpful. >> Had it sign. Define the SAS as including the hybrid model hold SAS. >> Yep. >> Okay, so with your permission, I'm going to add you to the list of contributors to the definition. I'm going to add-- >> Absolutely. >> I'm going to add this in. I'll share with Molly. >> Absolutely. >> We'll get on the calendar for the date. >> If Molly can share some specific language that we've been putting in that kind of goes to stuff we've been talking about, so. >> Oh, great. >> I think we can, we can share some written kind of concrete recommendations around this stuff, around the general purpose, nature, the common data thing and yeah. >> Okay. >> Really look forward to it and would be glad to be part of this thing. You said it's in February? >> It's in January, I'll let Molly know. >> Oh, January. >> What the date is. >> Excellent. >> Yeah, third week of January. Third week of January on a Tuesday, whatever that is. So yeah, we would welcome you in. But like I said, if it doesn't work for your schedule, we can prerecord something. But it would be awesome to have you in studio. >> I'm sure with this much notice we'll be able to get something. Let's make sure we have the dates communicated to Molly and she'll get my admin to set it up outside so that we have it. >> I'll get those today to you, Molly. Thank you. >> By the way, I am so, so pleased with being able to work with you guys on this. I think the industry needs it very bad. They need something to break them out of the box of their own mental constraints of what the cloud is versus what it's supposed to be. 
And obviously, the more we get people to question their reality and what is real, what are we really capable of today that then the more business that we're going to get. So we're excited to lend the hand behind this notion of supercloud and a super pass layer in whatever way we can. >> Awesome. >> Can I ask you whether your platforms include ARM as well as X86? >> So we have not done an ARM port yet. It has been entertained and won't be much of a stretch. >> Yeah, it's just a matter of time. >> Actually, entertained doing it on behalf of NVIDIA, but it will absolutely happen because ARM in the data center I think is a foregone conclusion. Well, it's already there in some cases, but not quite at volume. So definitely will be the case. And I'll tell you where this gets really interesting, discussion for another time, is back to my old friend, the SSD, and having SSDs that have enough brains on them to be part of that fabric. Directly. >> Interesting. Interesting. >> Very interesting. >> Directly attached to ethernet and able to create a data mesh global file system, that's going to be really fascinating. Got to run now. >> All right, hey, thanks you guys. Thanks David, thanks Molly. Great to catch up. Bye-bye. >> Bye >> Talk to you soon.
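Stepping back from the conversation, the definition David lands on is a common runtime environment for scheduling work plus a common data platform that presents the same data everywhere. Here is a minimal sketch of that idea using the official Kubernetes Python client: the same batch job could be submitted to any participating cluster, mounting the same dataset by claim name. The image, command, claim name, and namespace are hypothetical placeholders, and the shared file system behind the claim is assumed rather than shown.

```python
# Minimal sketch of "common runtime + common data": the same job spec is
# submitted to whichever cluster the kubeconfig points at, mounting a shared
# dataset by PVC name. Image, command, claim, and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig for the target cluster

container = client.V1Container(
    name="render-step",
    image="example.com/render:latest",              # placeholder image
    command=["render", "--input", "/data/scene"],   # placeholder command
    volume_mounts=[client.V1VolumeMount(name="shared-data", mount_path="/data")],
)

pod_spec = client.V1PodSpec(
    restart_policy="Never",
    containers=[container],
    volumes=[client.V1Volume(
        name="shared-data",
        persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
            claim_name="global-dataset"              # placeholder shared claim
        ),
    )],
)

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="render-job"),
    spec=client.V1JobSpec(template=client.V1PodTemplateSpec(spec=pod_spec)),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```

The design point is that only the kubeconfig changes from site to site; the job spec and the data path stay constant, which is the behavior the conversation attributes to a platform-level supercloud.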

Published Date : Oct 5 2022

SUMMARY :

David Flynn of Hammerspace argues that a true supercloud is best defined at the platform layer: a common runtime environment for scheduling work plus a common data platform that makes the same file, block, and object data available with the same APIs everywhere. By that test, SAS services such as Snowflake or NetSuite fall short, and caching approaches like NetApp Flexcache only approximate it, because they still depend on a central silo. He describes Hammerspace's split of the metadata-driven control plane from the data path, its contributions to the NFS 4.2 spec, its packaging as a software appliance that runs on bare metal, hypervisors, or cloud marketplace instances, and customers such as Blue Origin and the movie studios building this layer themselves. The conversation closes with an invitation to join the upcoming Supercloud 2 program alongside the creator of Data Mesh.


KubeCon + CloudNativeCon 2022 Preview w/ @Stu


 

>> KubeCon + CloudNativeCon kicks off in Detroit on October 24th, and we're pleased to have Stu Miniman, who's the director of Market Insights for hybrid platforms at Red Hat, back in the studio to help us understand the key trends to look for at the event. Stu, welcome back. Like old, old, old... >> Home. Thank you, David. It's great to see you, and I always love doing these previews, even though, Dave, come on. How many years have I told you, CloudNativeCon, it's a hoodie crowd. They're gonna totally call you out for wearing a tie and things like that. I know you want to be an ESPN sportscaster, but you know, I still don't think, even after this show's been around for so many years, that there's gonna be too many ties in Detroit. >> I know, I left the hoodie in my office, I'm sorry folks, but hey, we'll just have to go for it. Okay. Containers generally, and Kubernetes specifically, continue to show very strong spending momentum in the ETR survey data. So let's bring up this slide that shows the ETR sectors, all the sectors in the ETR taxonomy, with net score or spending velocity on the vertical axis and pervasiveness on the horizontal axis. Now, that red dotted line that you see, that marks the elevated 40% mark; anything above that is considered highly elevated in terms of momentum. Now, for years, the big four areas of momentum that shine above all the rest have been cloud, containers, RPA, and ML/AI. For the first time in 10 quarters, ML/AI and RPA have dropped below the 40% line, leaving only cloud and containers in rarefied air. Now, Stu, I'm sure this data doesn't surprise you, but what do you make of this? >> Yeah, well, Dave, I did an interview with Deepak, who owns all the container and open source activity at Amazon, earlier this year, and his comment was, the default deployment mechanism in Amazon is containers. So when I look at your data and I see containers and cloud going in sync, yeah, that's how we see things. We're helping lots of customers in their overall adoption. And this cloud native ecosystem is still, you know, we're still in that Cambrian explosion of new projects, new opportunities. AI's a great workload for these types of technologies. So it's really becoming pervasive in the marketplace. >> And I feel like the cloud and containers go hand in hand, so it's not surprising to see those two above >> The 40%. You know, there's nothing to say that, look, can I run my containers in my data center and not do the public cloud? Sure. But in the public cloud, the default is the container. And one of the hot discussions we've been having in this ecosystem for a number of years is edge computing. And of course, you know, I want something that's small and lightweight and can do things really fast. A lot of times it's an AI workload out there, and containers is a great fit at the edge too. So wherever it goes, containers is a good fit, which has been keeping my group at Red Hat pretty busy. >> So let's talk about some of those high level stats that we put together to preview the event. So it's really around the adoption of open source software and Kubernetes. Here's, you know, a few fun facts. So according to the State of Enterprise Open Source report, which was published by Red Hat, although it was based on a blind survey, nobody knew that Red Hat was, you know, initiating it. 80% of IT execs expect to increase their use of enterprise open source software.
Now, the CNCF community currently has more than 120,000 developers. That's insane when you think about that developer resource. 73% of organizations in the most recent CNCF annual survey are using Kubernetes. Now, despite the momentum, according to that same Red Hat survey, adoption barriers remain for some organizations. Stu, I'd love you to talk about this, specifically around skill sets, and then we've highlighted some of the other trends that we expect to see at the event. Stu, I'd love to get your thoughts, again, on the preview. You've done a number of these events: automation, security, governance at scale, edge deployments, which you just mentioned, among others. Now Kubernetes is eight years old, and I always hear people talking about there's something coming beyond Kubernetes, but it looks like we're just getting started. >> Yeah, Dave, it is still relatively early days. The CNCF survey, I think, said, you know, 96% of companies, when CNCF surveyed them last year, were either deploying Kubernetes or had plans to deploy it. But when I talk to enterprises, nobody has said, like, hey, we've got every group on board and all of our applications are on it. It is a multi-year journey for most companies, and plenty of them. If you look at the general adoption of technology, we're still working through kind of that early majority. We, you know, passed the chasm a couple of years ago. But to a point you and I were talking about, in this ecosystem there are plenty of people that could care less about containers and Kubernetes. Lots of conversations at this show won't even talk about Kubernetes. You've got, you know, a big security group that's in there. You've got, you know, certain workloads, like we talked about, you know, AI and ML, that are in there. And automation absolutely is playing a good role in what's going on here. So in some ways, Kubernetes kind of takes a backseat because it is table stakes at this point. So lots of people involved in it, lots of activities still going on. I mean, we're still at a cadence of three times a year now. We slowed it down from four times a year as an industry, but there's still lots of innovation happening, lots of adoption, and oh my gosh, Dave, I mean, there's just no shortage of new projects and new people getting involved. And what's phenomenal about it is there's, you know, end user practitioners that aren't just contributing. Many of the projects were spawned out of work by the likes of Intuit and Spotify and many others that created some of the projects that sit alongside or above the, you know, the container orchestration itself. >> So before we talk about some of that, it's kind of interesting. It's like Kubernetes is the big dog, right? And it's kind of maturing after, you know, eight years, but it's still important. I wanna share another data point that underscores the traction that containers generally are getting and Kubernetes specifically has. So this is data from the latest ETR survey and shows the spending breakdown for Kubernetes in the ETR data set. It's cut for respondents with 50 or more citations by the IT practitioners. The lime green is new adoptions, the forest green is spending 6% or more relative to last year, the gray is flat spending year on year, those little pink bars are 6% or more down in spending, and the bright red is retirements. So they're leaving the platform.
And the blue dots are net score, which is derived by subtracting the reds from the greens. And the yellow dots are pervasiveness in the survey relative to the sector. So the big takeaway here is that there is virtually no red, essentially zero churn, across all sectors: large companies, public companies, private firms, telcos, finance, insurance, et cetera. So again, sometimes I hear about this, things beyond Kubernetes, and you've mentioned several, but it feels like Kubernetes is still a driving force, with a lot of other projects around Kubernetes, which we're gonna hear about at the show. >> Yeah. So, Dave, right? First of all, there was, for a number of years, like, oh wait, you know, don't waste your time on containers because serverless is gonna rule the world. Well, serverless is now a little bit of a broader term. Can I give a serverless viewpoint to my developers so that they don't need to think about the infrastructure but still have containers underneath it? Absolutely. So our friends at Amazon have a solution called Fargate, their proprietary offering to kind of hide that piece of it. And in the open source world, there's a project called Knative. I think it's the second or third KnativeCon that's gonna happen at the CNCF event. And even if you use this, I can still call things over on Lambda and use some of those functions. So we know, Dave, it is additive, and nothing ever dominates the entire world and nothing ever dies. So we have a long runway of activities still to go on in containers and Kubernetes. We're always looking for what that next thing is. And what's great about this ecosystem is most of it tends to be additive and plug into the pieces there. There are certain tools that, you know, span beyond what can happen in the container world and aren't limited to it, and there are others that are specific to it. And to talk about the industries, Dave, you know, I love, we have a community event that we run that's gonna happen at KubeCon called OpenShift Commons. And when you look at, like, who's speaking there? Oh, we've got, you know, Lockheed Martin, University of Michigan, and ING Bank all speaking there. So you look and it's like, okay, cool, I've got automotive, I've got, you know, public sector, I've got, you know, university education, and I've got finance. So, all of you know, there is not an industry that is not touched by this. And the general wave of software adoption is the reason why, you know, not just adoption, but the creation of new software is one of the differentiators for companies. And that is the reason why I do containers: it isn't because it's some cool technology and Kubernetes is great to put on my resume, but that it can actually accelerate my developers and help me create technology that makes me respond to my business and my ultimate end users. >> Well, and you know, as you know, we've been talking about the Supercloud a lot, and Kubernetes is clearly an enabler to Supercloud, but I wanted to go back. You and John Furrier have done so many of, you know, the KubeCons, but go back to DockerCon before Kubernetes was even a thing. And so you sort of saw this, you know, grow. I think there's what, how many projects are in CNCF now? I mean, hundreds. Hundreds, okay. And so, will we hear things in Detroit, things like, you know, new projects like, you know, Argo and capabilities around Sigstore and things like that? Well, you're gonna hear a lot about that. Or is it just too much to cover?
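As a small aside on the additive point above, that code running in containers can still call out to serverless functions such as Lambda, here is a minimal, hypothetical Python sketch using boto3. The function name, payload, and region are made-up illustrations, and AWS credentials are assumed to already be configured in the environment.

import json
import boto3

# Hypothetical: invoke an existing Lambda function from code that could just as
# easily be running inside a container on Kubernetes, Fargate, or Knative.
lambda_client = boto3.client("lambda", region_name="us-east-1")

response = lambda_client.invoke(
    FunctionName="image-thumbnailer",          # assumed function name
    InvocationType="RequestResponse",          # wait synchronously for the result
    Payload=json.dumps({"object_key": "uploads/cat.png"}),
)

result = json.loads(response["Payload"].read())
print(result)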
>> So I mean, the good news, Dave, is that the CNCF really is a good steward for this community, and new things do get in. There's so much going on with the existing projects that some of the new ones sometimes have a little bit of a harder time making a little bit of buzz. One of the more interesting ones is a project that's been around for a while, and I think back to the first couple of KubeCons that John and I did: service mesh and Istio, which was created by Google but lived under basically, I guess you would say, a Google-dominated governance for a number of years, is now finally under the CNCF Foundation. I talked to a number of companies over the years, and definitely many of the contributors over the years, that didn't love that it was a Google-run thing, and now it is finally part of it. So just like Kubernetes, we have Istio and also Knative, which I mentioned before, that also came out of Google, and those are all in the CNCF. So will there be new projects? Yes. The CNCF sometimes does matchmaking. So in some of the observability space, there were a couple of projects where they said, hey, maybe you can go merge down the road. And they ended up doing that. So there's still, you look at all these projects, and if I was an end user saying, oh my God, there is so much change and so many projects, you know, I can't spend the time and the effort to learn about all of these. And that's one of the challenges, and something obviously at Red Hat we spend a lot of time figuring out, you know, not to pick winners, but which are the things that customers need, where can we help make them run in production for our customers, and help bring some stability and a little bit of security for the overall ecosystem. >> Well, speaking of security, security and skill sets, we've talked about those two things, and they sort of go hand in hand. When I go to security events, I mean, we were at re:Inforce last summer, we were just recently at the CrowdStrike event, a lot of the discussion is sort of best practice because it's so complicated. And I presume you're gonna hear a lot of that here, because securing containers now, you know, the whole shift left and shield right thing, is a complicated matter, especially when you saw, with the earlier data from the Red Hat survey, the gaps are around skill sets. People don't have the skills. So should we expect to hear a lot about that, a lot of sort of how to take advantage of some of these new capabilities? >> Yeah, Dave, absolutely. So, you know, one of the conversations going on in the community right now is, you know, has DevOps maybe played out as we expected to see it? There's a newer term called platform engineering, and how much do I need to do there? Something that I know your team's written a lot about, Dave, is how much do you need to know versus what can you shift to just a platform or a service that I can consume? I've talked a number of times with you, since I've been at Red Hat, about the cloud services that we offer. So you want to use our offering in the public cloud? Our first recommendation is, hey, we've got cloud services; how much Kubernetes do you really want to learn, versus doing what you can build on top of it, modernizing the pieces, and having less running the plumbing and electric and more, you know, taking advantage of the technologies there.
So that's a big thing we've seen, you know, we've got a big SRE team that can manage that for you, so that you have to spend less time worrying about what really is undifferentiated heavy lifting and spend more time on what's important to your business and your >> Customers. So, and that's through a managed service. >> Yeah, absolutely. >> That whole space has just taken off. All right, Stu, I'll give you the final word. You know, what are you excited about for this upcoming event, and Detroit? Interesting choice of venue? >> Yeah, look, first off, easy flight. I've never been to Detroit, so I'm willing to give it a shot, and hopefully, you know, that awesome airport. There's some good things there to learn. The show itself is really a choose your own adventure, because there's so much going on. The main show of KubeCon and CloudNativeCon is Wednesday through Friday, but a lot of really interesting stuff happens on Monday and Tuesday. So we talked about things like OpenShift Commons. In the security space, there's Cloud Native Security Day, which is actually two days, and a Sigstore event. There's a GitOps show, there's, you know, Knative day. There are so many things that if you want to go deep on a topic, you can go spend, like, a workshop in some of those, you can get hands on too. And then at the show itself, there's so much, and again, you can learn from your peers. So it was good to see, during the pandemic it tilted a little bit more vendor heavy, because I think most practitioners were pretty busy focused on what they could work on, and less, okay, hey, I'm gonna put together a presentation, and maybe I'm restricted at going to a show. >> Yeah, no, we definitely saw that last year when I went to LA. I was disappointed how few customer sessions there were. >> It's back. When I go look through the schedule now, there's way more end users sharing their stories, and it's phenomenal to see that. And the hallway track, Dave, I didn't go to Valencia, but I hear it was really hopping, felt way more like it was pre-pandemic. And while there's a few people that probably won't come because Detroit, we think there's, what we've heard and what I've heard from the CNCF team is they are expecting a sizable group up there. I know a lot of the hotels right near where it's being held are all sold out. So it should be a lot of fun. Good thing I'm speaking on an edge panel. First time I get to be a speaker at the show, Dave, it's kind of interesting to be in a little bit of a different role at the show. >> So yeah, Detroit's super convenient, as I said. Awesome airports too. Good luck at the show. So it's a full week. The cube will be there for three days, Tuesday, Wednesday, Thursday. Thanks for coming. >> Wednesday, Thursday, Friday, sorry. >> Wednesday, Thursday, Friday is the cube, right? So thank you for that. >> And no ties from the host. >> No ties, only hoodies. All right, Stu, thanks. Appreciate you coming in. Awesome. And thank you for watching this preview of KubeCon + CloudNativeCon with @Stu, which again starts the 24th of October, three days of broadcasting. Go to thecube.net and you can see all the action. We'll see you there.

Published Date : Oct 4 2022



Power Panel: Does Hardware Still Matter


 

(upbeat music) >> The ascendancy of cloud and SaaS has shone new light on how organizations think about, pay for, and value hardware. Once sought-after skills for practitioners with expertise in hardware troubleshooting, configuring ports, tuning storage arrays, and maximizing server utilization have been superseded by demand for cloud architects, DevOps pros, and developers with expertise in microservices, containers, application development, and the like. Even a company like Dell, the largest hardware company in enterprise tech, touts that it has more software engineers than those working in hardware. It begs the question: is hardware going the way of COBOL? Well, not likely. Software has to run on something, but the labor needed to deploy, troubleshoot, and manage hardware infrastructure is shifting. At the same time, we've seen the value flow also shifting in hardware. Once a world dominated by x86 processors, value is flowing to alternatives like Nvidia and Arm-based designs. Moreover, other componentry like NICs, accelerators, and storage controllers are becoming more advanced, integrated, and increasingly important. The question is, does it matter? And if so, why does it matter and to whom? What does it mean to customers, workloads, OEMs, and the broader society? Hello and welcome to this week's Wikibon theCUBE Insights powered by ETR. In this breaking analysis, we've organized a special power panel of industry analysts and experts to address the question, does hardware still matter? Allow me to introduce the panel. Bob O'Donnell is president and chief analyst at TECHnalysis Research. Zeus Kerravala is the founder and principal analyst at ZK Research. David Nicholson is a CTO and tech expert. Keith Townsend is CEO and founder of CTO Advisor. And Marc Staimer is the chief dragon slayer at Dragon Slayer Consulting and oftentimes a Wikibon contributor. Guys, welcome to theCUBE. Thanks so much for spending some time here. >> Good to be here. >> Thanks. >> Thanks for having us. >> Okay, before we get into it, I just want to bring up some data from ETR. This is a survey that ETR does every quarter. It's a survey of about 1200 to 1500 CIOs and IT buyers, and I'm showing a subset of the taxonomy here. This is an XY graph, and the vertical axis is something called net score. That's a measure of spending momentum. It's essentially the percentage of customers that are spending more on a particular area than those spending less. You subtract the lesses from the mores and you get a net score. The horizontal axis is pervasiveness in the data set. Sometimes they call it market share. It's not like IDC market share; it's just the percentage of activity in the data set as a percentage of the total. That red 40% line, anything over that is considered highly elevated. And for the past, I don't know, eight to 12 quarters, the big four have been AI and machine learning, containers, RPA, and cloud, and cloud of course is very impressive because not only is it elevated on the vertical axis, but you know it's very highly pervasive on the horizontal. So what I've done is highlighted in red that historical hardware sector. The server, the storage, the networking, and even PCs, despite the work from home, are depressed in relative terms. And of course, data center colocation services. Okay, so you're seeing obviously hardware is not... People don't have the spending momentum today that they used to.
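For readers who want the arithmetic spelled out, the net score and pervasiveness measures described above reduce to a few lines of code. A minimal Python sketch follows, using hypothetical survey percentages rather than actual ETR data or methodology.

def net_score(pct_new_adoption, pct_spending_more, pct_flat, pct_spending_less, pct_replacing):
    # Net score in percentage points: share of respondents increasing spend
    # on a sector minus the share decreasing spend or leaving the platform.
    more = pct_new_adoption + pct_spending_more
    less = pct_spending_less + pct_replacing
    return more - less

def pervasiveness(sector_citations, total_citations):
    # Share of all survey citations that mention the sector.
    return 100.0 * sector_citations / total_citations

# Hypothetical sector: 10% new adoptions, 45% spending more, 38% flat,
# 5% spending less, 2% replacing the platform.
print(net_score(10, 45, 38, 5, 2))   # 48 -> above the elevated 40% line
print(pervasiveness(450, 1500))      # 30.0% of citations in the data set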
They've got other priorities, et cetera, but I want to start and go kind of around the horn with each of you, what is the number one trend that each of you sees in hardware and why does it matter? Bob O'Donnell, can you please start us off? >> Sure Dave, so look, I mean, hardware is incredibly important and one comment first I'll make on that slide is let's not forget that hardware, even though it may not be growing, the amount of money spent on hardware continues to be very, very high. It's just a little bit more stable. It's not as subject to big jumps as we see certainly in other software areas. But look, the important thing that's happening in hardware is the diversification of the types of chip architectures we're seeing and how and where they're being deployed, right? You refer to this in your opening. We've moved from a world of x86 CPUs from Intel and AMD to things like obviously GPUs, DPUs. We've got VPU for, you know, computer vision processing. We've got AI-dedicated accelerators, we've got all kinds of other network acceleration tools and AI-powered tools. There's an incredible diversification of these chip architectures and that's been happening for a while but now we're seeing them more widely deployed and it's being done that way because workloads are evolving. The kinds of workloads that we're seeing in some of these software areas require different types of compute engines than traditionally we've had. The other thing is (coughs), excuse me, the power requirements based on where geographically that compute happens is also evolving. This whole notion of the edge, which I'm sure we'll get into a little bit more detail later is driven by the fact that where the compute actually sits closer to in theory the edge and where edge devices are, depending on your definition, changes the power requirements. It changes the kind of connectivity that connects the applications to those edge devices and those applications. So all of those things are being impacted by this growing diversity in chip architectures. And that's a very long-term trend that I think we're going to continue to see play out through this decade and well into the 2030s as well. >> Excellent, great, great points. Thank you, Bob. Zeus up next, please. >> Yeah, and I think the other thing when you look at this chart to remember too is, you know, through the pandemic and the work from home period a lot of companies did put their office modernization projects on hold and you heard that echoed, you know, from really all the network manufacturers anyways. They always had projects underway to upgrade networks. They put 'em on hold. Now that people are starting to come back to the office, they're looking at that now. So we might see some change there, but Bob's right. The size of those market are quite a bit different. I think the other big trend here is the hardware companies, at least in the areas that I look at networking are understanding now that it's a combination of hardware and software and silicon that works together that creates that optimum type of performance and experience, right? So some things are best done in silicon. Some like data forwarding and things like that. Historically when you look at the way network devices were built, you did everything in hardware. You configured in hardware, they did all the data for you, and did all the management. And that's been decoupled now. So more and more of the control element has been placed in software. 
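As one concrete, hedged example of the control plane moving into software: modern network operating systems commonly expose configuration and state over programmatic interfaces such as RESTCONF (RFC 8040) rather than CLI screen-scraping. The short Python sketch below assumes a reachable device at a placeholder address with lab credentials; exact data paths and authentication vary by vendor.

import requests

# Placeholder management address and lab credentials; real devices differ.
BASE = "https://192.0.2.10/restconf/data"
AUTH = ("admin", "admin")
HEADERS = {"Accept": "application/yang-data+json"}

# Pull the standard ietf-interfaces model instead of parsing CLI output.
resp = requests.get(f"{BASE}/ietf-interfaces:interfaces", auth=AUTH,
                    headers=HEADERS, verify=False)  # lab only: self-signed cert
resp.raise_for_status()

for intf in resp.json()["ietf-interfaces:interfaces"]["interface"]:
    print(intf["name"], "enabled" if intf.get("enabled") else "disabled")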
A lot of the high-performance things, encryption, and as I mentioned, data forwarding, packet analysis, stuff like that is still done in hardware, but not everything is done in hardware. And so it's a combination of the two. I think, for the people that work with the equipment as well, there's been more of a shift to understanding how to work with software. And this is a mistake I think the industry made for a while: we had everybody convinced they had to become a programmer. It's really more a software power user. Can you pull things out of software? Can you, through API calls and things like that? But I think the big frame here is, David, it's a combination of hardware and software working together that really makes a difference. And you know, how much you invest in hardware versus software kind of depends on the performance requirements you have. And I'll talk about that later, but that's really the big shift that's happened here. It's the vendors that figured out how to optimize performance by leveraging the best of all of those. >> Excellent. You guys both brought up some really good themes that we can tap into. Dave Nicholson, please. >> Yeah, so just kind of picking up where Bob started off. Not only are we seeing the rise of a variety of CPU designs, but I think increasingly the connectivity that's involved, from a hardware perspective, from kind of a server or service design perspective, has become increasingly important. I think we'll get a chance to look at this in more depth a little bit later, but when you look at what happens on the motherboard, you know, we're not in so much a CPU-centric world anymore. Various application environments have various demands, and you can meet them by using a variety of components. And it's extremely significant when you start looking down at the component level. It's really important that you optimize around those components. So I guess my summary would be, I think we are moving out of the CPU-centric hardware model into more of a connectivity-centric model. We can talk more about that later. >> Yeah, great. And thank you, David. And Keith Townsend, I'm really interested in your perspectives on this. I mean, for years you worked in a data center surrounded by hardware. Now that we have the software-defined data center, please chime in here. >> Well, you know, I'm going to dig deeper into that software-defined data center nature of what's happening with hardware. Hardware is meeting software: infrastructure as code is a thing. What does that code look like? We're still trying to figure that out, but it's about serving up these capabilities that the previous analysts have brought up. How do I ensure that I can get the level of services needed for the applications that I need, whether they're legacy, traditional data center workloads, AI/ML workloads, or workloads at the edge? How do I codify that and consume that as a service? And hardware vendors are figuring this out. HPE, the big push into GreenLake as a service. Dell now with APEX, taking what we need, these bare-bones components, moving it forward with DDR5 and 6, CXL, et cetera, and surfacing that as code or as services. This is a very tough problem as we transition from consuming a hardware-based configuration to this infrastructure-as-code paradigm shift. >> Yeah, programmable infrastructure, really attacking that sort of labor discussion that we were having earlier, okay. Last but not least, Marc Staimer, please. >> Thanks, Dave. My peers raised really good points.
I agree with most of them, but I'm going to disagree with the title of this session, which is, does hardware matter? It absolutely matters. You can't run software on the air. You can't run it in an ephemeral cloud, although there's the technical cloud and that's a different issue. The cloud is kind of changed everything. And from a market perspective in the 40 plus years I've been in this business, I've seen this perception that hardware has to go down in price every year. And part of that was driven by Moore's law. And we're coming to, let's say a lag or an end, depending on who you talk to Moore's law. So we're not doubling our transistors every 18 to 24 months in a chip and as a result of that, there's been a higher emphasis on software. From a market perception, there's no penalty. They don't put the same pressure on software from the market to reduce the cost every year that they do on hardware, which kind of bass ackwards when you think about it. Hardware costs are fixed. Software costs tend to be very low. It's kind of a weird thing that we do in the market. And what's changing is we're now starting to treat hardware like software from an OPEX versus CapEx perspective. So yes, hardware matters. And we'll talk about that more in length. >> You know, I want to follow up on that. And I wonder if you guys have a thought on this, Bob O'Donnell, you and I have talked about this a little bit. Marc, you just pointed out that Moore's laws could have waning. Pat Gelsinger recently at their investor meeting said that he promised that Moore's law is alive and well. And the point I made in breaking analysis was okay, great. You know, Pat said, doubling transistors every 18 to 24 months, let's say that Intel can do that. Even though we know it's waning somewhat. Look at the M1 Ultra from Apple (chuckles). In about 15 months increased transistor density on their package by 6X. So to your earlier point, Bob, we have this sort of these alternative processors that are really changing things. And to Dave Nicholson's point, there's a whole lot of supporting components as well. Do you have a comment on that, Bob? >> Yeah, I mean, it's a great point, Dave. And one thing to bear in mind as well, not only are we seeing a diversity of these different chip architectures and different types of components as a number of us have raised the other big point and I think it was Keith that mentioned it. CXL and interconnect on the chip itself is dramatically changing it. And a lot of the more interesting advances that are going to continue to drive Moore's law forward in terms of the way we think about performance, if perhaps not number of transistors per se, is the interconnects that become available. You're seeing the development of chiplets or tiles, people use different names, but the idea is you can have different components being put together eventually in sort of a Lego block style. And what that's also going to allow, not only is that going to give interesting performance possibilities 'cause of the faster interconnect. So you can share, have shared memory between things which for big workloads like AI, huge data sets can make a huge difference in terms of how you talk to memory over a network connection, for example, but not only that you're going to see more diversity in the types of solutions that can be built. So we're going to see even more choices in hardware from a silicon perspective because you'll be able to piece together different elements. 
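A quick back-of-the-envelope check on the M1 Ultra figure Dave cites a moment earlier: taking the roughly 6x increase in package transistor count in about 15 months at face value, the implied doubling time is far shorter than the classic 18 to 24 months. That gain comes largely from the kind of packaging and chiplet-style integration being described here, not from smaller transistors alone, and the 6x and 15-month inputs are simply the figures quoted on air.

import math

# If transistors on the package grow 6x in 15 months, the implied doubling time is
# 15 * ln(2) / ln(6) ~= 5.8 months, versus 18-24 months for classic Moore's law.
print(15 * math.log(2) / math.log(6))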
And oh, by the way, the other benefit of that is we've reached a point in chip architectures where not everything benefits from being smaller. We've been so focused and so obsessed when it comes to Moore's law, to the size of each individual transistor and yes, for certain architecture types, CPUs and GPUs in particular, that's absolutely true, but we've already hit the point where things like RF for 5g and wifi and other wireless technologies and a whole bunch of other things actually don't get any better with a smaller transistor size. They actually get worse. So the beauty of these chiplet architectures is you could actually combine different chip manufacturing sizes. You know you hear about four nanometer and five nanometer along with 14 nanometer on a single chip, each one optimized for its specific application yet together, they can give you the best of all worlds. And so we're just at the very beginning of that era, which I think is going to drive a ton of innovation. Again, gets back to my comment about different types of devices located geographically different places at the edge, in the data center, you know, in a private cloud versus a public cloud. All of those things are going to be impacted and there'll be a lot more options because of this silicon diversity and this interconnect diversity that we're just starting to see. >> Yeah, David. David Nicholson's got a graphic on that. They're going to show later. Before we do that, I want to introduce some data. I actually want to ask Keith to comment on this before we, you know, go on. This next slide is some data from ETR that shows the percent of customers that cited difficulty procuring hardware. And you can see the red is they had significant issues and it's most pronounced in laptops and networking hardware on the far right-hand side, but virtually all categories, firewalls, peripheral servers, storage are having moderately difficult procurement issues. That's the sort of pinkish or significant challenges. So Keith, I mean, what are you seeing with your customers in the hardware supply chains and bottlenecks? And you know we're seeing it with automobiles and appliances but so it goes beyond IT. The semiconductor, you know, challenges. What's been the impact on the buyer community and society and do you have any sense as to when it will subside? >> You know, I was just asked this question yesterday and I'm feeling the pain. People question, kind of a side project within the CTO advisor, we built a hybrid infrastructure, traditional IT data center that we're walking with the traditional customer and modernizing that data center. So it was, you know, kind of a snapshot of time in 2016, 2017, 10 gigabit, ARISTA switches, some older Dell's 730 XD switches, you know, speeds and feeds. And we said we would modern that with the latest Intel stack and connected to the public cloud and then the pandemic hit and we are experiencing a lot of the same challenges. I thought we'd easily migrate from 10 gig networking to 25 gig networking path that customers are going on. The 10 gig network switches that I bought used are now double the price because you can't get legacy 10 gig network switches because all of the manufacturers are focusing on the more profitable 25 gig for capacity, even the 25 gig switches. And we're focused on networking right now. It's hard to procure. We're talking about nine to 12 months or more lead time. So we're seeing customers adjust by adopting cloud. 
But if you remember early on in the pandemic, Microsoft Azure kind of gated customers that didn't have a capacity agreement. So customers are keeping an eye on that. There's a desire to abstract away from the underlying vendor to be able to control or provision your IT services in a way that we do with VMware VP or some other virtualization technology where it doesn't matter who can get me the hardware, they can just get me the hardware because it's critically impacting projects and timelines. >> So that's a great setup Zeus for you with Keith mentioned the earlier the software-defined data center with software-defined networking and cloud. Do you see a day where networking hardware is monetized and it's all about the software, or are we there already? >> No, we're not there already. And I don't see that really happening any time in the near future. I do think it's changed though. And just to be clear, I mean, when you look at that data, this is saying customers have had problems procuring the equipment, right? And there's not a network vendor out there. I've talked to Norman Rice at Extreme, and I've talked to the folks at Cisco and ARISTA about this. They all said they could have had blowout quarters had they had the inventory to ship. So it's not like customers aren't buying this anymore. Right? I do think though, when it comes to networking network has certainly changed some because there's a lot more controls as I mentioned before that you can do in software. And I think the customers need to start thinking about the types of hardware they buy and you know, where they're going to use it and, you know, what its purpose is. Because I've talked to customers that have tried to run software and commodity hardware and where the performance requirements are very high and it's bogged down, right? It just doesn't have the horsepower to run it. And, you know, even when you do that, you have to start thinking of the components you use. The NICs you buy. And I've talked to customers that have simply just gone through the process replacing a NIC card and a commodity box and had some performance problems and, you know, things like that. So if agility is more important than performance, then by all means try running software on commodity hardware. I think that works in some cases. If performance though is more important, that's when you need that kind of turnkey hardware system. And I've actually seen more and more customers reverting back to that model. In fact, when you talk to even some startups I think today about when they come to market, they're delivering things more on appliances because that's what customers want. And so there's this kind of app pivot this pendulum of agility and performance. And if performance absolutely matters, that's when you do need to buy these kind of turnkey, prebuilt hardware systems. If agility matters more, that's when you can go more to software, but the underlying hardware still does matter. So I think, you know, will we ever have a day where you can just run it on whatever hardware? Maybe but I'll long be retired by that point. So I don't care. >> Well, you bring up a good point Zeus. And I remember the early days of cloud, the narrative was, oh, the cloud vendors. They don't use EMC storage, they just run on commodity storage. And then of course, low and behold, you know, they've trot out James Hamilton to talk about all the custom hardware that they were building. And you saw Google and Microsoft follow suit. 
>> Well, (indistinct) been falling for this forever. Right? And I mean, all the way back to the turn of the century, we were calling for the commodity of hardware. And it's never really happened because you can still drive. As long as you can drive innovation into it, customers will always lean towards the innovation cycles 'cause they get more features faster and things. And so the vendors have done a good job of keeping that cycle up but it'll be a long time before. >> Yeah, and that's why you see companies like Pure Storage. A storage company has 69% gross margins. All right. I want to go jump ahead. We're going to bring up the slide four. I want to go back to something that Bob O'Donnell was talking about, the sort of supporting act. The diversity of silicon and we've marched to the cadence of Moore's law for decades. You know, we asked, you know, is Moore's law dead? We say it's moderating. Dave Nicholson. You want to talk about those supporting components. And you shared with us a slide that shift. You call it a shift from a processor-centric world to a connect-centric world. What do you mean by that? And let's bring up slide four and you can talk to that. >> Yeah, yeah. So first, I want to echo this sentiment that the question does hardware matter is sort of the answer is of course it matters. Maybe the real question should be, should you care about it? And the answer to that is it depends who you are. If you're an end user using an application on your mobile device, maybe you don't care how the architecture is put together. You just care that the service is delivered but as you back away from that and you get closer and closer to the source, someone needs to care about the hardware and it should matter. Why? Because essentially what hardware is doing is it's consuming electricity and dollars and the more efficiently you can configure hardware, the more bang you're going to get for your buck. So it's not only a quantitative question in terms of how much can you deliver? But it also ends up being a qualitative change as capabilities allow for things we couldn't do before, because we just didn't have the aggregate horsepower to do it. So this chart actually comes out of some performance tests that were done. So it happens to be Dell servers with Broadcom components. And the point here was to peel back, you know, peel off the top of the server and look at what's in that server, starting with, you know, the PCI interconnect. So PCIE gen three, gen four, moving forward. What are the effects on from an interconnect versus on performance application performance, translating into new orders per minute, processed per dollar, et cetera, et cetera? If you look at the advances in CPU architecture mapped against the advances in interconnect and storage subsystem performance, you can see that CPU architecture is sort of lagging behind in a way. And Bob mentioned this idea of tiling and all of the different ways to get around that. When we do performance testing, we can actually peg CPUs, just running the performance tests without any actual database environments working. So right now we're at this sort of imbalance point where you have to make sure you design things properly to get the most bang per kilowatt hour of power per dollar input. So the key thing here what this is highlighting is just as a very specific example, you take a card that's designed as a gen three PCIE device, and you plug it into a gen four slot. Now the card is the bottleneck. You plug a gen four card into a gen four slot. 
Now the gen four slot is the bottleneck. So we're constantly chasing these bottlenecks. Someone has to be focused on that from an architectural perspective; it's critically important. So there's no question that it matters. But of course, various people in this food chain won't care where it comes from. I guess a good analogy might be, where does our food come from? If I get a steak, it's a pink thing wrapped in plastic, right? Well, there are a lot of inputs that a lot of people have to care about to get that to me. Do I care about all of those things? No. Are they important? They're critically important. >> So, okay. So what does this all mean to customers? And so what I'm hearing from you is that to balance a system, it's becoming, you know, more complicated. And I've kind of been waiting for this day for a long time, because as we all know, the bottleneck was always the spinning disk, the last mechanical device. So people who wrote software knew that when they were doing it right, the disk had to go and do stuff, and so they were doing other things in the software. And now with all these new interconnects and flash and things, you could do atomic writes. And so that opens up new software possibilities, and you combine that with alternative processors. But what's the so-what on this to the customer, and the application impact? Can anybody address that? >> Yeah, let me address that for a moment. I want to leverage some of the things that Bob said, Keith said, Zeus said, and David said, yeah. So I'm a bit of a contrarian in some of this. For example, on the chip side. As the chips get smaller, 14 nanometer, 10 nanometer, five nanometer, soon three nanometer, we talk about more cores, but the biggest problem on the chip is the interconnect from the chip, 'cause the wires get smaller. People don't realize, in 2004 the latency on those wires in the chips was 80 picoseconds. Today it's 1300 picoseconds. That's on the chip. This is why they're not getting faster. So we may be getting a little bit of a slowdown in Moore's law. But even as we kind of conquer that, you still have the interconnect problem, and the interconnect problem goes beyond the chip. It goes within the system, composable architectures. It goes to the point that Keith made: ultimately you need a hybrid, because what we're seeing, what I'm seeing when I'm talking to customers, the biggest issue they have is moving data. Whether it be in a chip, in a system, in a data center, between data centers, moving data is now the biggest gating item in performance. So if you want to move it from, let's say, your transactional database to your machine learning, it's the bottleneck, it's moving the data. And so when you look at it from a distributed environment, now you've got to move the compute to the data. The only way to get around these bottlenecks today is to spend less time in trying to move the data and more time in taking the compute, the software, running on hardware, closer to the data. Go ahead. >> So is this what you mean when Nicholson was talking about a shift from a processor-centric world to a connectivity-centric world? You're talking about moving the bits across all the different components, not having the processor, you're saying, essentially becoming the bottleneck, or the memory, I guess. >> Well, that's one of them, and there's a lot of different bottlenecks, but it's the data movement itself. It's moving away from, wait, why do we need to move the data?
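To put rough, assumed numbers on why moving the data hurts so much: even on a fast link, bulk transfer time dwarfs the time to ship a small result back, and the speed of light in fiber puts a hard floor under round trips between distant sites. A back-of-the-envelope sketch, with every figure assumed for illustration only:

def transfer_seconds(bytes_to_move, link_gbps):
    # Time to push a payload over a link, ignoring protocol overhead and congestion.
    return bytes_to_move * 8 / (link_gbps * 1e9)

def fiber_rtt_ms(distance_km):
    # Light in fiber travels at roughly 200,000 km/s, so a round trip over the
    # distance has a physical latency floor regardless of bandwidth.
    return 2 * distance_km / 200_000 * 1000

ten_tb = 10e12   # assumed 10 TB transactional data set
one_mb = 1e6     # assumed 1 MB aggregate or model update shipped back instead

print(transfer_seconds(ten_tb, 10))  # ~8000 s (over two hours) to move the data at 10 Gb/s
print(transfer_seconds(one_mb, 10))  # ~0.0008 s to move a small result the other way
print(fiber_rtt_ms(3000))            # ~30 ms round-trip floor across ~3000 km of fiber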
Can we move the compute, the processing closer to the data? Because if we keep them separate and this has been a trend now where people are moving processing away from it. It's like the edge. I think it was Zeus or David. You were talking about the edge earlier. As you look at the edge, who defines the edge, right? Is the edge a closet or is it a sensor? If it's a sensor, how do you do AI at the edge? When you don't have enough power, you don't have enough computable. People were inventing chips to do that. To do all that at the edge, to do AI within the sensor, instead of moving the data to a data center or a cloud to do the processing. Because the lag in latency is always limited by speed of light. How fast can you move the electrons? And all this interconnecting, all the processing, and all the improvement we're seeing in the PCIE bus from three, to four, to five, to CXL, to a higher bandwidth on the network. And that's all great but none of that deals with the speed of light latency. And that's an-- Go ahead. >> You know Marc, no, I just want to just because what you're referring to could be looked at at a macro level, which I think is what you're describing. You can also look at it at a more micro level from a systems design perspective, right? I'm going to be the resident knuckle dragging hardware guy on the panel today. But it's exactly right. You moving compute closer to data includes concepts like peripheral cards that have built in intelligence, right? So again, in some of this testing that I'm referring to, we saw dramatic improvements when you basically took the horsepower instead of using the CPU horsepower for the like IO. Now you have essentially offload engines in the form of storage controllers, rate controllers, of course, for ethernet NICs, smart NICs. And so when you can have these sort of offload engines and we've gone through these waves over time. People think, well, wait a minute, raid controller and NVMe? You know, flash storage devices. Does that make sense? It turns out it does. Why? Because you're actually at a micro level doing exactly what you're referring to. You're bringing compute closer to the data. Now, closer to the data meaning closer to the data storage subsystem. It doesn't solve the macro issue that you're referring to but it is important. Again, going back to this idea of system design optimization, always chasing the bottleneck, plugging the holes. Someone needs to do that in this value chain in order to get the best value for every kilowatt hour of power and every dollar. >> Yeah. >> Well this whole drive performance has created some really interesting architectural designs, right? Like Nickelson, the rise of the DPU right? Brings more processing power into systems that already had a lot of processing power. There's also been some really interesting, you know, kind of innovation in the area of systems architecture too. If you look at the way Nvidia goes to market, their drive kit is a prebuilt piece of hardware, you know, optimized for self-driving cars, right? They partnered with Pure Storage and ARISTA to build that AI-ready infrastructure. I remember when I talked to Charlie Giancarlo, the CEO of Pure about when the three companies rolled that out. He said, "Look, if you're going to do AI, "you need good store. "You need fast storage, fast processor and fast network." And so for customers to be able to put that together themselves was very, very difficult. There's a lot of software that needs tuning as well. 
So the three companies partner together to create a fully integrated turnkey hardware system with a bunch of optimized software that runs on it. And so in that case, in some ways the hardware was leading the software innovation. And so, the variety of different architectures we have today around hardware has really exploded. And I think it, part of the what Bob brought up at the beginning about the different chip design. >> Yeah, Bob talked about that earlier. Bob, I mean, most AI today is modeling, you know, and a lot of that's done in the cloud and it looks from my standpoint anyway that the future is going to be a lot of AI inferencing at the edge. And that's a radically different architecture, Bob, isn't it? >> It is, it's a completely different architecture. And just to follow up on a couple points, excellent conversation guys. Dave talked about system architecture and really this that's what this boils down to, right? But it's looking at architecture at every level. I was talking about the individual different components the new interconnect methods. There's this new thing called UCIE universal connection. I forget what it stands answer for, but it's a mechanism for doing chiplet architectures, but then again, you have to take it up to the system level, 'cause it's all fine and good. If you have this SOC that's tuned and optimized, but it has to talk to the rest of the system. And that's where you see other issues. And you've seen things like CXL and other interconnect standards, you know, and nobody likes to talk about interconnect 'cause it's really wonky and really technical and not that sexy, but at the end of the day it's incredibly important exactly. To the other points that were being raised like mark raised, for example, about getting that compute closer to where the data is and that's where again, a diversity of chip architectures help and exactly to your last comment there Dave, putting that ability in an edge device is really at the cutting edge of what we're seeing on a semiconductor design and the ability to, for example, maybe it's an FPGA, maybe it's a dedicated AI chip. It's another kind of chip architecture that's being created to do that inferencing on the edge. Because again, it's that the cost and the challenges of moving lots of data, whether it be from say a smartphone to a cloud-based application or whether it be from a private network to a cloud or any other kinds of permutations we can think of really matters. And the other thing is we're tackling bigger problems. So architecturally, not even just architecturally within a system, but when we think about DPUs and the sort of the east west data center movement conversation that we hear Nvidia and others talk about, it's about combining multiple sets of these systems to function together more efficiently again with even bigger sets of data. So really is about tackling where the processing is needed, having the interconnect and the ability to get where the data you need to the right place at the right time. And because those needs are diversifying, we're just going to continue to see an explosion of different choices and options, which is going to make hardware even more essential I would argue than it is today. And so I think what we're going to see not only does hardware matter, it's going to matter even more in the future than it does now. >> Great, yeah. Great discussion, guys. I want to bring Keith back into the conversation here. 
Keith, if your main expertise in tech is provisioning LUNs, you probably want to look for another job. So okay, clearly hardware matters, but with software defined everything, do people with hardware expertise matter outside of, for instance, component manufacturers or cloud companies? I mean, VMware certainly changed the dynamic in servers. Dell just spun off its most profitable asset in VMware. So it obviously thinks hardware can stand alone. How does an enterprise architect view the shift to software defined hyperscale cloud, and how do you see the shifting demand for skills in enterprise IT? >> So I love the question and I'll take a different view of it. If you're a data analyst and your primary value add is that you do ETL transformation... I talked to a CDO, a chief data officer of a midsize bank a little while ago. He said 80% of his data scientists' time is spent on ETL. Super not value add. He wants his data scientists to do data science work. Chances are if your only value is that you do LUN provisioning, then you probably don't have a job now. The technologies have gotten much more intelligent. We want to give infrastructure pros the opportunities to shine, and I think the software defined nature and the automation that we're seeing vendors undertake, whether it's Dell, HP, Lenovo, take your pick, or Pure Storage, NetApp, that are doing the automation and the ML needed, means these practitioners don't spend 80% of their time doing LUN provisioning and can focus on their true expertise, which is ensuring that data is stored, data is retrievable, data's protected, et cetera. I think the shift is to focus on that part of the job, ensuring the data no matter where the data's at, because my data is spread across the enterprise, hybrid, different types. You know, Dave, you talk about the super cloud a lot. If my data is in the super cloud, protecting that data and securing that data becomes much more complicated than when it was me just procuring or provisioning LUNs. So when you say, where should the shift be, or the look be, it's, you know, focusing on the real value, which is making sure that customers can access data, can recover data, can get data at the performance levels that they need, within the price point they need, to get at those datasets where they need it. We talked a lot about where they need it. One last point about this interconnecting. I have this vision, and I think we all do, of composable infrastructure. This idea that scale out does not solve every problem. The cloud can give me infinite scale out. Sometimes I just need a single OS with 64 terabytes of RAM and 204 GPUs or GPU instances, and that single OS does not exist today. And the opportunity is to create composable infrastructure so that we solve a lot of these problems that just simply don't scale out. >> You know, wow. So many interesting points there. I had just interviewed Zhamak Dehghani, who's the founder of data mesh, last week. And she made a really interesting point. She said, "Think about, we have separate stacks. We have an application stack and we have a data pipeline stack and the transaction systems, the transaction database, we extract data from that," to your point, "We ETL it in, you know, it takes forever. And then we have this separate sort of data stack." If we're going to inject more intelligence and data and AI into applications, those two stacks, her contention is they have to come together. 
And when you think about, you know, super cloud bringing compute to data, that was what Hadoop was supposed to be. It ended up all sort of going into a central location, but it's almost a rhetorical question. I mean, it seems that that necessitates new thinking around hardware architectures when kind of everything's the edge. And the other point is, to your point, Keith, it's really hard to secure that. So when you think about offloads, right, you've heard the stats, you know, Nvidia talks about it, Broadcom talks about it, that, you know, 25 to 30% of CPU cycles are wasted on doing things like storage offloads, or networking or security. It seems like, maybe Zeus you have a comment on this, it seems like new architectures need to come together to support, you know, all of that stuff that Keith and I just discussed. >> Yeah, and by the way, I do want to come back to the question you just asked, Keith. Keith, it's the point I made at the beginning too, about engineers do need to be more software-centric, right? They do need to have better software skills. In fact, I remember talking to Cisco about this last year when they surveyed their engineer base, only about a third of 'em had ever made an API call, which, you know, kind of shows this big skillset change that has to come. But on the point of architectures, I think the big change here is edge, because it brings in distributed compute models. Historically, when you think about compute, even with multi-cloud, we never really had multi-cloud. We'd use multiple centralized clouds, but compute was always centralized, right? It was in a branch office, in a data center, in a cloud. With edge, what it creates is the rise of distributed computing, where we'll have an application that actually accesses different resources at different edge locations. And I think Marc, you were talking about this, like the edge could be in your IoT device. It could be your campus edge. It could be cellular edge, it could be your car, right? And so we need to start thinkin' about how our applications interact with all those different parts of that edge ecosystem, you know, to create a single experience. A lot of consumer apps largely work that way. If you think of an app like Uber, right? It pulls in information from all kinds of different edge applications, edge services. And, you know, it creates a pretty cool experience. We're just starting to get to that point in the business world now. There's a lot of security implications and things like that, but I do think it drives more architectural decisions to be made about how I deploy what data where, and where I do my processing, where I do my AI and things like that. It actually makes the world more complicated. In some ways we can do so much more with it, but I think it does drive us more towards turnkey systems, at least initially, in order to, you know, ensure performance and security. >> Right. Marc, I wanted to go to you. You had indicated to me that you wanted to chat about this a little bit. You've written quite a bit about the integration of hardware and software. You know, we've watched Oracle's move from, you know, buying Sun and then basically using that in a highly differentiated approach. Engineered systems. What's your take on all that? I know you also have some thoughts on the shift from CapEx to OPEX, chime in on that. >> Sure. When you look at it, there are advantages to having one vendor who has the software and hardware. 
They can synergistically make them work together in a way that you can't do on a commodity basis, where you own the software and somebody else has the hardware. An example would be Oracle. As you talked about with their Exadata platform, they literally are leveraging microcode in the Intel chips, and now in AMD chips, and all the way down to Optane. They basically make AMD database servers work with Optane persistent memory, PMem, in their storage systems, not NVMe SSDs. I'm talking about the cards themselves. So there are advantages you can take advantage of if you own the stack, as you were pointing out earlier, Dave, of both the software and the hardware. Okay, that's great. But on the other side of that, that tends to give you better performance, but it tends to cost a little more. On the commodity side it costs less but you get less performance. As Zeus said earlier, it depends where you're running your application. How much performance do you need? What kind of performance do you need? One of the things about moving to the edge, and I'll get to the OPEX CapEx in a second, one of the issues about moving to the edge is what kind of processing do you need? If you're running on a CCTV camera on top of a traffic light, how much power do you have? How much cooling do you have that you can run this? And more importantly, do you have to take the data you're getting and move it somewhere else to get processed, and then the information is sent back? I mean, there are companies out there like BrainChip that have developed AI chips that can run on the sensor without a CPU, without any additional memory. So, I mean, there's innovation going on to deal with this question of data movement. There are companies out there like Tachyon that are combining GPUs, CPUs, and DPUs in a single chip. Think of it as super composable architecture. They're looking at being able to do more in less. On the OPEX and CapEx issue. >> Hold that thought, hold that thought on the OPEX CapEx, 'cause we're running out of time and maybe you can wrap on that. I just wanted to pick up on something you said about the integrated hardware and software. I mean, other than the fact that, you know, Michael Dell unlocked whatever $40 billion for himself and Silver Lake, I was always a fan of a spin in with VMware, basically becoming the Oracle of hardware. Now I know it would've been a nightmare for the ecosystem, and culturally they probably would've had a VMware brain drain, but does anybody have any thoughts on that as a sort of thought exercise? I was always a fan of that on paper. >> I got to eat a little crow. I did not like the Dell VMware acquisition for the industry in general. And I think it hurt the industry in general, HPE, Cisco walked away a little bit from that VMware relationship. But when I talked to customers, they loved it. You know, I got to be honest. They absolutely loved the integration. The VxRail, VxRack solution exploded. Nutanix became kind of an afterthought when it came to competing. So that spin in, when we talk about the ability to innovate and the ability to create solutions that you just simply can't create because you don't have the full stack, Dell was well positioned to do that with a potential spin in of VMware. >> Yeah, we're going to be-- Go ahead please. >> Yeah, in fact, I think you're right, Keith, it was terrible for the industry. Great for Dell. 
And I remember talking to Chad Sakac when he was running, you know, VCE, which became VxRack and VxRail, their ability to stay in lockstep with what VMware was doing. What was the number one workload running on hyperconverged forever? It was VMware. So their ability to remain in lockstep with VMware gave them a huge competitive advantage. And Dell came out of nowhere in, you know, the hyper-converged market and just started taking share because of that relationship. So, you know, I guess from a Dell perspective, I thought it gave them a pretty big advantage that they didn't really exploit across their other properties, right? Networking and servers and things like that, which they could have, given the dominance that VMware had. From an industry perspective though, I do think it's better to have them be coupled. So. >> I agree. I mean, I think they could have dominated in super cloud, and maybe they would have become the next Oracle, where everybody hates 'em, but they kick ass. But guys, we got to wrap up here. And so what I'm going to ask you is, I'm going to go in reverse order this time, you know, big takeaways from this conversation today, which, guys, by the way, I can't thank you enough for, phenomenal insights. But big takeaways, any final thoughts, any research that you're working on that you want to highlight, or, you know, what you look for in the future? Try to keep it brief. We'll go in reverse order. Maybe Marc, you could start us off please. >> Sure, on the research front, I'm working on a total cost of ownership study of an integrated database, analytics, and machine learning machine versus separate services. On the other aspect that I wanted to chat about real quickly, OPEX versus CapEx: the cloud changed the market perception of hardware in the sense that you can use hardware, or buy hardware, like you do software. As you use it, pay for what you use, in arrears. The good thing about that is you're only paying for what you use, period. You're not paying for what you don't use. I mean, it's compute time, everything else. The bad side about that is you have no predictability in your bill. It's elastic, but every user I've talked to says every month it's different. And from a budgeting perspective, it's very hard to set up your budget year to year, and it's causing a lot of nightmares. So it's just something to be aware of. From a CapEx perspective, you have no more CapEx if you're using that kind of base system, but you lose a certain amount of control as well. So ultimately those are some of the issues. But my biggest point, my biggest takeaway from this, is that the biggest issue right now for everybody I talk to, in some shape or form, comes down to data movement, whether it be the ETLs that you talked about, Keith, or other aspects, moving it between hybrid locations, moving it within a system, moving it within a chip. All those are key issues. >> Great, thank you. Okay, CTO Advisor, give us your final thoughts. >> All right. Really, really great commentary. Again, I'm going to point back to us taking the walk that our customers are taking, which is trying to do this conversion of an all primary data center approach to a hybrid one, and I have this hard earned philosophy that enterprise IT is additive. When we add a service, we rarely subtract a service. So the landscape and surface area of what we support has to grow. So our research focuses on taking that walk. 
We are taking a monolithic application, decomposing that into containers, putting that in a public cloud, connecting that back to the private data center, and telling that story and walking that walk with our customers. This has been a super enlightening panel. >> Yeah, thank you. Real, real different world coming. David Nicholson, please. >> You know, it really hearkens back to the beginning of the conversation. You talked about momentum in the direction of cloud. I'm sort of spending my time under the hood, getting grease under my fingernails, focusing on where the lion's share of spend will still be in coming years, which is on-prem, and then of course, obviously, data center infrastructure for cloud. But really diving under the covers and helping folks understand the ramifications of movement between generations of CPU architecture. I know we all know Sapphire Rapids pushed into the future. When's the next Intel release coming? Who knows? We think, you know, in 2023. There have been a lot of people standing by from a practitioner's standpoint asking, well, what do I do between now and then? Does it make sense to upgrade bits and pieces of hardware, or go from a last generation to a current generation when we know the next generation is coming? And so I've been very, very focused on looking at how these connectivity components, like RAID controllers and NICs. I know it's not as sexy as talking about cloud, but just how these components completely change the game and actually can justify movement from say a 14th-generation architecture to a 15th-generation architecture today, even though gen 16 is coming, let's say 12 months from now. So that's where I am. Keep my phone number in the Rolodex. I literally reference Rolodex intentionally, because like I said, I'm in there under the hood and it's not as sexy. But yeah, so that's what I'm focused on, Dave. >> Well, you know, to paraphrase, maybe a derivative paraphrase of, you know, Larry Ellison's rant on what is cloud? It's operating systems and databases, et cetera. RAID controllers and NICs live inside of clouds. All right. You know, one of the reasons I love working with you guys is 'cause you have such a wide observation space, and Zeus Kerravala, you, of all people, you know you have your fingers in a lot of pies. So give us your final thoughts. >> Yeah, I'm not as propeller-heady as my chip counterparts here. (all laugh) So, you know, I look at the world a little differently, and a lot of my research I'm doing now is the impact that distributed computing has on customer and employee experiences, right? You talk to every business, and how the experiences they deliver to their customers is really differentiating how they go to market. And so they're looking at these different ways of feeding up data and analytics and things like that in different places. And I think this is going to have a really profound impact on enterprise IT architecture. We're putting more data, more compute, in more places, all the way down to like little micro edges and retailers and things like that. And so we need the variety. Historically, if you think back to when I was in IT, you know, pre-Y2K, we didn't have a lot of choice in things, right? We had a server that was rack mount or standup, right? And there wasn't a whole lot of, you know, differences in choice. But today we can deploy, you know, these really high-performance compute systems on little blades inside servers, or inside, you know, autonomous vehicles and things. I think the world from here gets... 
You know, just the choice of what we have and the way hardware and software works together is really going to, I think, change the world the way we do things. We're already seeing that, like I said, in the consumer world, right? There's so many things you can do from, you know, smart home perspective, you know, natural language processing, stuff like that. And it's starting to hit businesses now. So just wait and watch the next five years. >> Yeah, totally. The computing power at the edge is just going to be mind blowing. >> It's unbelievable what you can do at the edge. >> Yeah, yeah. Hey Z, I just want to say that we know you're not a propeller head and I for one would like to thank you for having your master's thesis hanging on the wall behind you 'cause we know that you studied basket weaving. >> I was actually a physics math major, so. >> Good man. Another math major. All right, Bob O'Donnell, you're going to bring us home. I mean, we've seen the importance of semiconductors and silicon in our everyday lives, but your last thoughts please. >> Sure and just to clarify, by the way I was a great books major and this was actually for my final paper. And so I was like philosophy and all that kind of stuff and literature but I still somehow got into tech. Look, it's been a great conversation and I want to pick up a little bit on a comment Zeus made, which is this it's the combination of the hardware and the software and coming together and the manner with which that needs to happen, I think is critically important. And the other thing is because of the diversity of the chip architectures and all those different pieces and elements, it's going to be how software tools evolve to adapt to that new world. So I look at things like what Intel's trying to do with oneAPI. You know, what Nvidia has done with CUDA. What other platform companies are trying to create tools that allow them to leverage the hardware, but also embrace the variety of hardware that is there. And so as those software development environments and software development tools evolve to take advantage of these new capabilities, that's going to open up a lot of interesting opportunities that can leverage all these new chip architectures. That can leverage all these new interconnects. That can leverage all these new system architectures and figure out ways to make that all happen, I think is going to be critically important. And then finally, I'll mention the research I'm actually currently working on is on private 5g and how companies are thinking about deploying private 5g and the potential for edge applications for that. So I'm doing a survey of several hundred us companies as we speak and really looking forward to getting that done in the next couple of weeks. >> Yeah, look forward to that. Guys, again, thank you so much. Outstanding conversation. Anybody going to be at Dell tech world in a couple of weeks? Bob's going to be there. Dave Nicholson. Well drinks on me and guys I really can't thank you enough for the insights and your participation today. Really appreciate it. Okay, and thank you for watching this special power panel episode of theCube Insights powered by ETR. Remember we publish each week on Siliconangle.com and wikibon.com. All these episodes they're available as podcasts. DM me or any of these guys. I'm at DVellante. You can email me at David.Vellante@siliconangle.com. Check out etr.ai for all the data. This is Dave Vellante. We'll see you next time. (upbeat music)

Published Date : Apr 25 2022



Breaking Analysis: Technology & Architectural Considerations for Data Mesh


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data driven insights from theCUBE in ETR, this is Breaking Analysis with Dave Vellante. >> The introduction in socialization of data mesh has caused practitioners, business technology executives, and technologists to pause, and ask some probing questions about the organization of their data teams, their data strategies, future investments, and their current architectural approaches. Some in the technology community have embraced the concept, others have twisted the definition, while still others remain oblivious to the momentum building around data mesh. Here we are in the early days of data mesh adoption. Organizations that have taken the plunge will tell you that aligning stakeholders is a non-trivial effort, but necessary to break through the limitations that monolithic data architectures and highly specialized teams have imposed over frustrated business and domain leaders. However, practical data mesh examples often lie in the eyes of the implementer, and may not strictly adhere to the principles of data mesh. Now, part of the problem is lack of open technologies and standards that can accelerate adoption and reduce friction, and that's what we're going to talk about today. Some of the key technology and architecture questions around data mesh. Hello, and welcome to this week's Wikibon CUBE Insights powered by ETR, and in this Breaking Analysis, we welcome back the founder of data mesh and director of Emerging Technologies at Thoughtworks, Zhamak Dehghani. Hello, Zhamak. Thanks for being here today. >> Hi Dave, thank you for having me back. It's always a delight to connect and have a conversation. Thank you. >> Great, looking forward to it. Okay, so before we get into it in the technology details, I just want to quickly share some data from our friends at ETR. You know, despite the importance of data initiative since the pandemic, CIOs and IT organizations have had to juggle of course, a few other priorities, this is why in the survey data, cyber and cloud computing are rated as two most important priorities. Analytics and machine learning, and AI, which are kind of data topics, still make the top of the list, well ahead of many other categories. And look, a sound data architecture and strategy is fundamental to digital transformations, and much of the past two years, as we've often said, has been like a forced march into digital. So while organizations are moving forward, they really have to think hard about the data architecture decisions that they make, because it's going to impact them, Zhamak, for years to come, isn't it? >> Yes, absolutely. I mean, we are moving really from, slowly moving from reason based logical algorithmic to model based computation and decision making, where we exploit the patterns and signals within the data. So data becomes a very important ingredient, of not only decision making, and analytics and discovering trends, but also the features and applications that we build for the future. So we can't really ignore it, and as we see, some of the existing challenges around getting value from data is not necessarily that no longer is access to computation, is actually access to trustworthy, reliable data at scale. >> Yeah, and you see these domains coming together with the cloud and obviously it has to be secure and trusted, and that's why we're here today talking about data mesh. So let's get into it. 
Zhamak, first, your new book is out, 'Data Mesh: Delivering Data-Driven Value at Scale', just recently published, so congratulations on getting that done, awesome. Now in a recent presentation, you pulled excerpts from the book, and we're going to talk through some of the technology and architectural considerations. Just quickly for the audience, the four principles of data mesh: domain driven ownership, data as product, self-serve data platform and federated computational governance. So I want to start with the self-serve platform and some of the data that you shared recently. You say that, "Data mesh serves autonomous domain oriented teams versus existing platforms, which serve a centralized team." Can you elaborate? >> Sure. I mean the role of the platform is to lower the cognitive load for domain teams, for people who are focusing on the business outcomes, the technologists that are building the applications, to really lower the cognitive load for them, to be able to work with data. Whether they are building analytics, automated decision making, intelligent modeling. They need to be able to get access to data and use it. So the role of the platform, I guess, just stepping back for a moment, is to empower and enable these teams. Data mesh by definition is a scale out model. It's a decentralized model that wants to give autonomy to cross-functional teams. So it at its core requires a set of tools that work really well in that decentralized model. When we look at the existing platforms, they try to achieve this similar outcome, right? Lower the cognitive load, give the tools to data practitioners to manage data at scale, because today the centralized data teams, really their job isn't directly aligned with one or two or, you know, different business units and business outcomes in terms of getting value from data. Their job is to manage the data and make the data available for those cross-functional teams or business units to use the data. So the platforms they've been given are really centralized around, or tuned to work with, this centralized team structure. Although on the surface it seems, why not? Why can't I use my, you know, cloud storage or computation or data warehouse in a decentralized way? You should be able to, but some changes need to happen to those platforms. As an example, some cloud providers simply have hard limits on the number of, like, storage accounts that you can have. Because they never envisaged you'd have hundreds of lakes. They envisaged one or two, maybe 10 lakes, right. They envisaged really centralizing data, not decentralizing data. So I think we see a shift in thinking about enabling autonomous independent teams versus a centralized team. >> So just a follow up if I may, we could be here for a while. But so this assumes that you've sorted out the organizational considerations? That you've defined what a data product is and a sub product. And people will say, of course, we use the term monolithic as a pejorative, let's face it. But the data warehouse crowd will say, "Well, that's what data marts did. So we got that covered." But your premise of data mesh, if I understand it, is whether it's a data mart or a data warehouse, or a data lake or whatever, a Snowflake warehouse, it's a node on the mesh. Okay. So don't build your organization around the technology, let the technology serve the organization, is that-- >> That's a perfect way of putting it, exactly. 
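A small way to picture the shift Zhamak describes here, from one central lake to many domain-owned nodes: instead of a single shared bucket that a central team governs, each domain gets its own storage and its own ownership tags. The sketch below uses boto3 purely as a familiar stand-in; the domain names, bucket naming convention and tagging scheme are assumptions for illustration, not a prescription from the conversation.

import boto3

# Assumes AWS credentials are configured; illustrative only.
s3 = boto3.client("s3")

# One storage node per domain data product, owned by that domain's team,
# rather than a single central lake managed by one data team.
domains = ["orders", "payments", "customer-care"]

for domain in domains:
    bucket = f"acme-{domain}-data-product"   # hypothetical naming convention
    s3.create_bucket(Bucket=bucket)
    s3.put_bucket_tagging(
        Bucket=bucket,
        Tagging={"TagSet": [
            {"Key": "owner", "Value": f"{domain}-domain-team"},
            {"Key": "mesh-node", "Value": "true"},
        ]},
    )

The point is not the particular cloud call; it is that ownership boundaries, rather than the platform's convenience, decide how storage is carved up, which is why per-account limits designed around one or two lakes start to pinch.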
I mean, for a very long time, when we look at decomposition of complexity, we've looked at decomposition of complexity around technology, right? So we have technology and that's maybe a good segue to actually the next item on that list that we looked at. Oh, I need to decompose based on whether I want to have access to raw data and put it on the lake. Whether I want to have access to model data and put it on the warehouse. You know I need to have a team in the middle to move the data around. And then try to figure organization into that model. So data mesh really inverses that, and as you said, is look at the organizational structure first. Then scale boundaries around which your organization and operation can scale. And then the second layer look at the technology and how you decompose it. >> Okay. So let's go to that next point and talk about how you serve and manage autonomous interoperable data products. Where code, data policy you say is treated as one unit. Whereas your contention is existing platforms of course have independent management and dashboards for catalogs or storage, et cetera. Maybe we double click on that a bit. >> Yeah. So if you think about that functional, or technical decomposition, right? Of concerns, that's one way, that's a very valid way of decomposing, complexity and concerns. And then build solutions, independent solutions to address them. That's what we see in the technology landscape today. We will see technologies that are taking care of your management of data, bring your data under some sort of a control and modeling. You'll see technology that moves that data around, will perform various transformations and computations on it. And then you see technology that tries to overlay some level of meaning. Metadata, understandability, discovery was the end policy, right? So that's where your data processing kind of pipeline technologies versus data warehouse, storage, lake technologies, and then the governance come to play. And over time, we decomposed and we compose, right? Deconstruct and reconstruct back this together. But, right now that's where we stand. I think for data mesh really to become a reality, as in independent sources of data and teams can responsibly share data in a way that can be understood right then and there can impose policies, right then when the data gets accessed in that source and in a resilient manner, like in a way that data changes structure of the data or changes to the scheme of the data, doesn't have those downstream down times. We've got to think about this new nucleus or new units of data sharing. And we need to really bring back transformation and governing data and the data itself together around these decentralized nodes on the mesh. So that's another, I guess, deconstruction and reconstruction that needs to happen around the technology to formulate ourselves around the domains. And again the data and the logic of the data itself, the meaning of the data itself. >> Great. Got it. And we're going to talk more about the importance of data sharing and the implications. But the third point deals with how operational, analytical technologies are constructed. You've got an app DevStack, you've got a data stack. You've made the point many times actually that we've contextualized our operational systems, but not our data systems, they remain separate. Maybe you could elaborate on this point. >> Yes. I think this is, again, has a historical background and beginning. 
For a really long time, applications have dealt with features and the logic of running the business, and encapsulated the data and the state that they need to run that feature or run that business function. And then for anything analytically driven, which required access to data across these applications, and across the longer dimension of time, around different subjects within the organization... This analytical data, we had made a decision that, "Okay, let's leave those applications aside. Let's leave those databases aside. We'll extract the data out and we'll load it, or we'll transform it, and put it under the analytical kind of data stack, and then downstream from it, we will have the analytical data users, the data analysts, the data scientists and the, you know, the portfolio of users that are growing, use that data stack." And that led to this real separation of dual stacks with point to point integration. So applications went down the path of transactional databases or, you know, document stores, using APIs for communicating, and then we've gone to, you know, lake storage or data warehouse on the other side. And that again enforces the silo of data versus app, right? So if we are moving to a world where our ambitions are around making applications more intelligent, making them data driven, these two worlds need to come closer. As in, ML analytics gets embedded into those applications themselves, and data sharing, as a very essential ingredient of that, gets embedded and becomes closer to those applications. So, if you are looking at this now cross-functional, app and data based team, right, business team, then the technology stacks can't be so segregated, right? There has to be a continuum of experience from app delivery, to sharing of the data, to using that data, to embedding models back into those applications. And that continuum of experience requires well integrated technologies. I'll give you an example, and actually in some sense we are somewhat moving in that direction. If we are talking about data sharing or data modeling, applications use one set of APIs, you know, HTTP compliant, GraphQL or REST APIs. And on the other hand, you have proprietary SQL, like connect to my database and run SQL. Those are two very different models of representing and accessing data. So we kind of have to harmonize or integrate those two worlds a bit more closely to achieve that domain oriented cross-functional team. >> Yeah. We are going to talk about some of the gaps later, and actually you look at them as opportunities more than barriers. But they are barriers, but they're opportunities for more innovation. Let's go on to the fourth one. The next point, it deals with the roles that the platform serves. Data mesh proposes that domain experts own the data and take responsibility for it end to end, and are served by the technology. Kind of, we referenced that before. Whereas your contention is that today, data systems are really designed for specialists. I think you use the term hyper specialists a lot. I love that term. And the generalists are kind of passive bystanders, waiting in line for the technical teams to serve them. >> Yes. I mean, if you think about the, again, the intention behind data mesh was creating a responsible data sharing model that scales out. And I challenge any organization that has scaled ambitions around data, or usage of data, that relies on small pockets of very expensive specialist resources, right? 
So we have no choice but upskilling, cross-skilling the majority population of our technologists. We often call them generalists, right? That's a shorthand for people that can really move from one technology to another technology. Sometimes we call them paint drip people, sometimes we call them T-shaped people. But regardless, we need the ability to really mobilize our generalists. And we had to do that at Thoughtworks. We serve a lot of our clients, and like many other organizations, we are also challenged with hiring specialists. So we have tested the model of having a few specialists really conveying and translating the knowledge to generalists and bringing them forward. And of course, platform is a big enabler of that. Like, what is the language of using the technology? What are the APIs that delight that generalist experience? This doesn't mean no code, low code. We don't have to throw away good engineering practices. And I think good software engineering practices remain to exist. Of course, they get adapted to the world of data to build resilient, you know, sustainable solutions. But specialty, especially around kind of proprietary technology, is going to be a hard one to scale. >> Okay. I'm definitely going to come back and pick your brain on that one. And, you know, your point about scale out and the examples, the practical examples of companies that have implemented data mesh that I've talked to. I think in all cases, you know, there's only a handful that I've really gone deep with, but it was their Hadoop instances, their clusters wouldn't scale, they couldn't scale the business around it. So that's really a key point of a common pattern that we've seen now. I think in all cases, they went to like the data lake model and AWS. And so that maybe has some violation of the principles, but we'll come back to that. But so let me go on to the next one. Of course, data mesh leans heavily toward this concept of decentralization, to support domain ownership over the centralized approaches. And we certainly see this, the public cloud players, database companies as key actors here with very large install bases, pushing a centralized approach. So I guess my question is, how realistic is this next point, where you have decentralized technologies ruling the roost? >> I think if you look at the history of places in our industry where decentralization has succeeded, they heavily relied on standardization of connectivity, you know, across different components of technology. And I think right now you are right. The way we get value from data relies on collection. At the end of the day, collection of data. Whether you have a deep learning, machine learning model that you're training, or you have, you know, reports to generate. Regardless, the model is bring your data to a place that you can collect it, so that we can use it. And that leads to a natural set of technologies that try to operate as a full stack, integrated, proprietary, with no intention of, you know, opening data for sharing. Now, conversely, if you think about the internet itself, the web itself, microservices, even at the enterprise level, not at the planetary level, they succeeded as decentralized technologies to a large degree because of their emphasis on the open net and openness and sharing, right. API sharing. We don't talk about, in the API world, like we don't say, you know, "I will build a platform to manage your logical applications." Maybe to a degree, but we actually moved away from that. 
We say, "I'll build a platform that opens around applications to manage your APIs, manage your interfaces." Right? Give you access to API. So I think the shift needs to... That definition of decentralized there means really composable, open pieces of the technology that can play nicely with each other, rather than a full stack, all have control of your data yet being somewhat decentralized within the boundary of my platform. That's just simply not going to scale if data needs to come from different platforms, different locations, different geographical locations, it needs to rethink. >> Okay, thank you. And then the final point is, is data mesh favors technologies that are domain agnostic versus those that are domain aware. And I wonder if you could help me square the circle cause it's nuanced and I'm kind of a 100 level student of your work. But you have said for example, that the data teams lack context of the domain and so help us understand what you mean here in this case. >> Sure. Absolutely. So as you said, we want to take... Data mesh tries to give autonomy and decision making power and responsibility to people that have the context of those domains, right? The people that are really familiar with different business domains and naturally the data that that domain needs, or that naturally the data that domains shares. So if the intention of the platform is really to give the power to people with most relevant and timely context, the platform itself naturally becomes as a shared component, becomes domain agnostic to a large degree. Of course those domains can still... The platform is a (chuckles) fairly overloaded world. As in, if you think about it as a set of technology that abstracts complexity and allows building the next level solutions on top, those domains may have their own set of platforms that are very much doing agnostic. But as a generalized shareable set of technologies or tools that allows us share data. So that piece of technology needs to relinquish the knowledge of the context to the domain teams and actually becomes domain agnostic. >> Got it. Okay. Makes sense. All right. Let's shift gears here. Talk about some of the gaps and some of the standards that are needed. You and I have talked about this a little bit before, but this digs deeper. What types of standards are needed? Maybe you could walk us through this graphic, please. >> Sure. So what I'm trying to depict here is that if we imagine a world that data can be shared from many different locations, for a variety of analytical use cases, naturally the boundary of what we call a node on the mesh will encapsulates internally a fair few pieces. It's not just the boundary of that, not on the mesh, is the data itself that it's controlling and updating and maintaining. It's of course a computation and the code that's responsible for that data. And then the policies that continue to govern that data as long as that data exists. So if that's the boundary, then if we shift that focus from implementation details, that we can leave that for later, what becomes really important is the scene or the APIs and interfaces that this node exposes. And I think that's where the work that needs to be done and the standards that are missing. And we want the scene and those interfaces be open because that allows, you know, different organizations with different boundaries of trust to share data. 
Not only to share data to kind of move that data to, yes, another location, but to share the data in a way that distributed workloads, distributed analytics, distributed machine learning models can happen on the data where it is. So if you follow that line of thinking around the decentralization and connection of data versus collection of data, I think the very, very important piece of it that needs really deep thinking, and I don't claim that I have done that, is how do we share data responsibly and sustainably, right? In a way that is not brittle. If you think about it today, the ways we share data, one of the very common ways is, I'll give you a JDBC endpoint, or I give you an endpoint to your, you know, database of choice. And now, as a user, you have access to the schema of the underlying data and can run various queries or SQL queries on it. That's very simple and easy to get started with. That's why SQL is an evergreen, you know, standard or semi standard, pseudo standard that we all use. But it's also very brittle, because we are dependent on an underlying schema and formatting of the data that's been designed to tell the computer how to store and manage the data. So I think that the data sharing APIs of the future really need to think about removing these brittle dependencies, think about sharing not only the data, but what we call metadata, I suppose. An additional set of characteristics that is always shared along with the data to make the data usage, I suppose, ethical and also friendly for the users. And also, I think we have to... That data sharing API, the other element of it, is to allow kind of computation to run where the data exists. So if you think about SQL again, as a simple primitive example of computation, when we select and when we filter and when we join, the computation is happening on that data. So maybe there is a next level of articulating distributed computation on the data that simply trains models, right? Your language primitives change in a way to allow sophisticated analytical workloads to run on the data more responsibly, with policies and access control enforced. So I think that output port that I mentioned is simply about next generation, responsible data sharing APIs, suitable for decentralized analytical workloads. >> So I'm not trying to bait you here, but I have a follow up as well. So schema, for all its good, creates constraints. And no schema, schema on read, that didn't work, 'cause it was just a free for all and it created the data swamps. But now you have technology companies trying to solve that problem. Take Snowflake for example, you know, enabling data sharing. But it is within its proprietary environment. Certainly Databricks is doing something, you know, trying to come at it from its angle, bringing some of the best of the data warehouse together with data science. Is your contention that those remain sort of proprietary, de facto standards? And then what we need is more open standards? Maybe you could comment. >> Sure. I think there are two points. One is, as you mentioned, open standards that allow... actually make the underlying platform invisible. I mean my litmus test for a technology provider to say, "I'm a data mesh," (laughs) kind of compliant is, "Is your platform invisible?" As in, can I replace it with another and yet get the similar data sharing experience that I need? So part of it is that. Part of it is open standards, they're not really proprietary. 
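To make the brittleness point above concrete: handing a consumer a JDBC-style endpoint couples them to the table's internal layout, while the output port idea Zhamak sketches shares records together with the metadata and policy that govern them. The snippet below is a loose illustration in Python, not any real data mesh API; every name in it (OutputPort, the purpose check, the example fields) is hypothetical.

import sqlite3
from dataclasses import dataclass, field

# Brittle sharing: the consumer binds directly to the table's internal layout.
def raw_share(db_path: str):
    con = sqlite3.connect(db_path)
    # Any change to the orders table breaks every consumer of this query.
    return con.execute("SELECT order_id, cust_id, amt FROM orders").fetchall()

# Hypothetical output port: data travels with its metadata, and policy is
# enforced at access time, where the data lives.
@dataclass
class OutputPort:
    records: list
    schema_version: str
    semantics: dict                      # column meanings, units, owning domain
    allowed_purposes: set = field(default_factory=set)

    def read(self, purpose: str):
        if purpose not in self.allowed_purposes:
            raise PermissionError(f"purpose '{purpose}' not permitted")
        return self.records, {"schema_version": self.schema_version, **self.semantics}

port = OutputPort(
    records=[{"order_id": 1, "customer": "a1", "amount_usd": 42.0}],
    schema_version="2.1",
    semantics={"amount_usd": "order total in US dollars", "owner": "orders domain"},
    allowed_purposes={"analytics"},
)
rows, meta = port.read(purpose="analytics")

The particular classes don't matter; the point is that schema, meaning and access policy move with the data instead of being rediscovered, and re-broken, by every downstream consumer.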
The other angle for kind of sharing data across different platforms, so that, you know, we don't get stuck with one technology or another, is around APIs. It is around code that is protecting that internal schema. So where we are on the curve of evolution of technology, right now we are exposing the internal structure of the data, which is designed to optimize certain modes of access. We're exposing that to the end client and application APIs, right? So the APIs that use the data today are very much aware that this database was optimized for machine learning workloads, hence you will deal with a columnar storage format, versus this other API that's optimized for a very different, report type access, relational access, and is optimized around rows. I think that should become irrelevant in the API sharing of the future. Because as a user, I shouldn't care how this data is internally optimized, right? The language primitive that I'm using should be really agnostic to the machine optimization underneath that. And if we did that, perhaps this war between warehouse or lake or the other will become actually irrelevant. So we're optimizing for the best human experience, as opposed to the best machine experience. We still have to do that, but we have to make that invisible. Make that an implementation concern. So that's another angle of what should... If we daydream together, the best and most resilient experience in terms of data usage are these APIs that are agnostic to the internal storage structure. >> Great, thank you for that. We've gotten our ankles wet now on the controversy, so we might as well wade all the way in, I can't let you go without addressing some of this. Which you've catalyzed, which I, by the way, see as a sign of progress. So this gentleman, Paul Andrew, is an architect and he gave a presentation I think last night. And he teased it as quote, "The theory from Zhamak Dehghani versus the practical experience of a technical architect, AKA me," meaning him. And Zhamak, you were quick to shoot back that data mesh is not theory, it's based on practice. And some practices are experimental, some are more baked, and data mesh really avoids, by design, the specificity of vendor or technology. Perhaps you intend to frame your post as a technology or vendor specific implementation. So touche, that was excellent. (Zhamak laughs) Now you don't need me to defend you, but I will anyway. You spent 14 plus years as a software engineer and the better part of a decade consulting with some of the most technically advanced companies in the world. But I'm going to push you a little bit here and say, some of this tension is of your own making, because you purposefully don't talk about technologies and vendors. Sometimes doing so is instructive for us neophytes. So why don't you ever, like, use specific examples of technology for frames of reference? >> Yes. My role is to push us to the next level. So, you know, everybody picks their fights, picks their battles. My role in this battle is to push us to think beyond what's available today. Of course, that's my public persona. On a day to day basis, actually, I work with clients and existing technology, and I think at Thoughtworks, we gave a case study talk with a colleague of mine, and I intentionally got him to talk about (indistinct), the technology that we used to implement data mesh. And the reason I haven't really embraced, in my conversations, the specific technology. 
One is, I feel the technology solutions we're using today are still not ready for the vision. I mean, we are in this transitional step, and no matter what, we have to be pragmatic, of course, and practical, I suppose, and use the existing vendors that exist, and I wholeheartedly embrace that, but that's just not my role, to show that. I've gone through this transformation once before in my life. When microservices happened, we were building microservices-like architectures with technology that wasn't ready for it. Big web application servers that were designed to run these giant monolithic applications, and we were trying to run little microservices on them. And the tail was wagging the dog, the environmental complexity of running these services was consuming so much of our effort that we couldn't really pay attention to the business logic, the business value. And that's where we are today. The complexity of integrating existing technologies is really overwhelming, capturing a lot of our attention and cost, money and effort, as opposed to really focusing on the data products themselves. So that's just the role I have, but it doesn't mean that, you know, we have to rebuild the world. We've got to make do with what we have in this transitional phase until the new generation of technologies, I guess, comes around and reshapes our landscape of tools. >> Well, impressive public discipline. Your point about microservices is interesting, because a lot of those early microservices weren't so micro, and for the naysayers, look, past is not prologue. But Thoughtworks was really early on in the whole concept of microservices, so I'll be very excited to see how this plays out. But now, there were some other good comments. There was one from a gentleman who said the most interesting aspects of data mesh are organizational. And that's how my colleague Sanji Mohan frames data mesh versus data fabric. You know, I'm not sure, I think we've only sort of scratched the surface today; data mesh is more than that. And I still think data fabric is what NetApp defined as software defined storage infrastructure that can serve on-prem and public cloud workloads, back in, whatever, 2016. But the point you make in the thread that we're showing you here is a warning, and you referenced this earlier, that segregating different modes of access will lead to fragmentation. And we don't want to repeat the mistakes of the past. >> Yes, there are comments around, again, going back to that original conversation that we had at a macro level. We've got this tendency to decompose complexity based on technical solutions. And, you know, the conversation could be, "Oh, I do batch or you do streams and we are different." They create these bifurcations in our decisions based on the technology, where I do events and you do tables, right? So that sort of segregation of modes of access causes accidental complexity that we keep dealing with. Because every time in this tree you create a new branch, you create a new kind of, a new set of tools that then somehow need to be point to point integrated. You create new specialization around that. So the fewer branches we have, the better. Think really about the continuum of experiences that we need to create, and technologies that simplify that continuum of experience. So one of the things, for example, to give you a past experience: 
I was really excited around the papers and the work that came around on Apache Beam, and generally flow based programming and stream processing. Because basically they were saying, whether you are doing batch or whether you're doing streaming, it's all one stream. And sometimes the window of time narrows, and sometimes the window of time over which you're computing widens, and at the end of the day, you are just doing stream processing. So it is those sorts of notions that simplify and create a continuum of experience that resonate with me personally, more than creating these tribal fights of this type versus that mode of access. So that's why data mesh naturally selects kind of this multimodal access to support end users, right? The persona of end users. >> Okay. So the last topic I want to hit: this whole discussion, the topic of data mesh, it's highly nuanced, it's new, and people are going to shoehorn data mesh into their respective views of the world. And we talked about lakehouses, and there's three buckets. And of course, the gentleman from LinkedIn with Azure, Microsoft has a data mesh community. So you're going to have to enlist some serious army of enforcers to adjudicate. And I wrote some of the stuff down. I mean, it's interesting. Monte Carlo has a data mesh calculator. Starburst is leaning in. ChaosSearch sees themselves as an enabler. Oracle and Snowflake both use the term data mesh. And then of course you've got big practitioners, JPMC, we've talked to Intuit, Orlando, HelloFresh has been on, Netflix has this event based sort of streaming implementation. So my question is, how realistic is it that the clarity of your vision can be implemented and not polluted by really rich technology companies and others? (Zhamak laughs) >> Is it even possible, right? Is it even possible? That's a yes. That's why I practice then. This is why I should practice things. 'Cause I think it's going to be hard. What I'm hopeful about is that the socio-technical... Like we mentioned, this is a socio-technical concern or solution, not just a technology solution. Hopefully that always brings us back to, you know, the reality that vendors try to sell you snake oil that solves all of your problems. (chuckles) All of your data mesh problems. It's just going to cause more problems down the track. So we'll see, time will tell, Dave, and I count on you as one of those members of, (laughs) you know, folks that will continue to share their platform. To go back to the roots, as in, why in the first place? I mean, I dedicated a whole part of the book to 'Why?' Because we get, as you said, we get carried away with vendors and technology solutions trying to ride a wave. And in that story, we forget the reason for which we're even making this change and are going to spend all of these resources. So hopefully we can always come back to that. >> Yeah. And I think we can. I think you have really given this some deep thought, and as we pointed out, this was based on practical knowledge and experience. And look, we've been trying to solve this data problem for a long, long time. You've not only articulated it well, but you've come up with solutions. So Zhamak, thank you so much. We're going to leave it there, and I'd love to have you back. >> Thank you for the conversation. I really enjoyed it. And thank you for sharing your platform to talk about data mesh. >> Yeah, you bet. All right. And I want to thank my colleague, Stephanie Chan, who helps research topics for us. 
Alex Myerson is on production and Kristen Martin, Cheryl Knight and Rob Hoff on editorial. Remember all these episodes are available as podcasts, wherever you listen. And all you got to do is search Breaking Analysis Podcast. Check out ETR's website at etr.ai for all the data. And we publish a full report every week on wikibon.com, siliconangle.com. You can reach me by email david.vellante@siliconangle.com or DM me @dvellante. Hit us up on our LinkedIn post. This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week, stay safe, be well. And we'll see you next time. (bright music)
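As a quick aside on the Apache Beam point made above, the idea that batch and streaming are one model, where only the window of computation changes, can be sketched in a few lines. This is a hypothetical illustration, not anything discussed in the interview; the topic name, file path, and event format are assumptions made for the example.

```python
# Minimal sketch: the same Beam pipeline serves "batch" and "streaming";
# only the source and the windowing change. Names below (topic, bucket,
# JSON field) are illustrative assumptions.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import window


def extract_user(record):
    # Assume each record is a JSON object with a "user_id" field.
    return json.loads(record)["user_id"]


def run(streaming: bool):
    opts = PipelineOptions(streaming=streaming)
    with beam.Pipeline(options=opts) as p:
        if streaming:
            # Unbounded source: events arrive continuously.
            raw = p | beam.io.ReadFromPubSub(topic="projects/demo/topics/clicks")
        else:
            # Bounded source: the same events, landed as files.
            raw = p | beam.io.ReadFromText("gs://demo-bucket/clicks-*.json")

        _ = (
            raw
            | beam.Map(lambda r: (extract_user(r), 1))
            # The only real difference: one global window over the bounded
            # data set versus a narrow fixed window over the live stream.
            | beam.WindowInto(
                window.FixedWindows(60) if streaming else window.GlobalWindows()
            )
            | beam.CombinePerKey(sum)
            | beam.Map(print)
        )


# run(streaming=False)  # treat the landed files as one wide "batch" window
# run(streaming=True)   # same logic, 60-second windows over the live stream
```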

Published Date : Apr 20 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Kristen Martin | PERSON | 0.99+
Rob Hoff | PERSON | 0.99+
Cheryl Knight | PERSON | 0.99+
Stephanie Chan | PERSON | 0.99+
Alex Myerson | PERSON | 0.99+
Dave | PERSON | 0.99+
Zhamak | PERSON | 0.99+
one | QUANTITY | 0.99+
Dave Vellante | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
10 lakes | QUANTITY | 0.99+
Sanji Mohan | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Paul Andrew | PERSON | 0.99+
two | QUANTITY | 0.99+
Netflix | ORGANIZATION | 0.99+
Zhamak Dehghani | PERSON | 0.99+
Data Mesh: Delivering Data-Driven Value at Scale | TITLE | 0.99+
Boston | LOCATION | 0.99+
Oracle | ORGANIZATION | 0.99+
14 plus years | QUANTITY | 0.99+
Palo Alto | LOCATION | 0.99+
two points | QUANTITY | 0.99+
siliconangle.com | OTHER | 0.99+
second layer | QUANTITY | 0.99+
2016 | DATE | 0.99+
LinkedIn | ORGANIZATION | 0.99+
today | DATE | 0.99+
Snowflake | ORGANIZATION | 0.99+
hundreds of lakes | QUANTITY | 0.99+
theCUBE | ORGANIZATION | 0.99+
david.vellante@siliconangle.com | OTHER | 0.99+
theCUBE Studios | ORGANIZATION | 0.98+
SQL | TITLE | 0.98+
one unit | QUANTITY | 0.98+
first | QUANTITY | 0.98+
100 level | QUANTITY | 0.98+
third point | QUANTITY | 0.98+
Databricks | ORGANIZATION | 0.98+
Europe | LOCATION | 0.98+
three buckets | QUANTITY | 0.98+
ETR | ORGANIZATION | 0.98+
DevStack | TITLE | 0.97+
One | QUANTITY | 0.97+
wikibon.com | OTHER | 0.97+
both | QUANTITY | 0.97+
Thoughtworks | ORGANIZATION | 0.96+
one set | QUANTITY | 0.96+
one stream | QUANTITY | 0.96+
Intuit | ORGANIZATION | 0.95+
one way | QUANTITY | 0.93+
two worlds | QUANTITY | 0.93+
HelloFresh | ORGANIZATION | 0.93+
this week | DATE | 0.93+
last night | DATE | 0.91+
fourth one | QUANTITY | 0.91+
Snowflake | TITLE | 0.91+
two different models | QUANTITY | 0.91+
ML Analytics | TITLE | 0.91+
Breaking Analysis | TITLE | 0.87+
two worlds | QUANTITY | 0.84+

Breaking Analysis: Data Mesh...A New Paradigm for Data Management


 

from the cube studios in palo alto in boston bringing you data driven insights from the cube and etr this is breaking analysis with dave vellante data mesh is a new way of thinking about how to use data to create organizational value leading edge practitioners are beginning to implement data mesh in earnest and importantly data mesh is not a single tool or a rigid reference architecture if you will rather it's an architectural and organizational model that's really designed to address the shortcomings of decades of data challenges and failures many of which we've talked about on the cube as important by the way it's a new way to think about how to leverage data at scale across an organization and across ecosystems data mesh in our view will become the defining paradigm for the next generation of data excellence hello and welcome to this week's wikibon cube insights powered by etr in this breaking analysis we welcome the founder and creator of data mesh author thought leader technologist jamaak dagani shamak thank you for joining us today good to see you hi dave it's great to be here all right real quick let's talk about what we're going to cover i'll introduce or reintroduce you to jamaac she joined us earlier this year in our cube on cloud program she's the director of emerging tech at dot works north america and a thought leader practitioner software engineer architect and a passionate advocate for decentralized technology solutions and and data architectures and jamaa since we last had you on as a guest which was less than a year ago i think you've written two books in your spare time one on data mesh and another called software architecture the hard parts both published by o'reilly so how are you you've been busy i've been busy yes um good it's been a great year it's been a busy year i'm looking forward to the end of the year and the end of these two books but it's great to be back and um speaking with you well you got to be pleased with the the momentum that data mesh has and let's just jump back to the agenda for a bit and get that out of the way we're going to set the stage by sharing some etr data our partner our data partner on the spending profile and some of the key data sectors and then we're going to review the four key principles of data mesh just it's always worthwhile to sort of set that framework we'll talk a little bit about some of the dependencies and the data flows and we're really going to dig today into principle number three and a bit around the self-service data platforms and to that end we're going to talk about some of the learnings that shamak has captured since she embarked on the datamess journey with her colleagues and her clients and we specifically want to talk about some of the successful models for building the data mesh experience and then we're going to hit on some practical advice and we'll wrap with some thought exercises maybe a little tongue-in-cheek some of the community questions that we get so the first thing i want to do we'll just get this out of the way is introduce the spending climate we use this xy chart to do this we do this all the time it shows the spending profiles and the etr data set for some of the more data related sectors of the ecr etr taxonomy they they dropped their october data last friday so i'm using the july survey here we'll get into the october survey in future weeks but about 1500 respondents i don't see a dramatic change coming in the october survey but the the y-axis is net score or spending momentum the horizontal axis 
is market share or presence in the data set and that red line that 40 percent anything over that we consider elevated so for the past eight quarters or so we've seen machine learning slash ai rpa containers and cloud is the four areas where cios and technology buyers have shown the highest net scores and as we've said what's so impressive for cloud is it's both pervasive and it shows high velocity from a spending standpoint and we plotted the three other data related areas database edw analytics bi and big data and storage the first two well under the red line are still elevated the storage market continues to kind of plot along and we've we've plotted the outsourced it just to balance it out for context that's an area that's not so hot right now so i just want to point out that these areas ai automation containers and cloud they're all relevant to data and they're fundamental building blocks of data architectures as are the two that are directly related to data database and analytics and of course storage so it just gives you a picture of the spending sector so i wanted to share this slide jamark uh that that we presented in that you presented in your webinar i love this it's a taxonomy put together by matt turk who's a vc and he called this the the mad landscape machine learning and ai and data and jamock the key point here is there's no lack of tooling you've you've made the the data mesh concept sort of tools agnostic it's not like we need more tools to succeed in data mesh right absolutely great i think we have plenty of tools i think what's missing is a meta architecture that defines the landscape in a way that it's in step with organizational growth and then defines that meta architecture in a way that these tools can actually interoperable and to operate and integrate really well like the the clients right now have a lot of challenges in terms of picking the right tool regardless of the technology they go down the path either they have to go in and big you know bite into a big data solution and then try to fit the other integrated solutions around it or as you see go to that menu of large list of applications and spend a lot of time trying to kind of integrate and stitch this tooling together so i'm hoping that data mesh creates that kind of meta architecture for tools to interoperate and plug in and i think our conversation today around self-subjective platform um hopefully eliminate that yeah we'll definitely circle back because that's one of the questions we get all the time from the community okay let's review the four main principles of data mesh for those who might not be familiar with it and those who are it's worth reviewing jamar allow me to introduce them and then we can discuss a bit so a big frustration i hear constantly from practitioners is that the data teams don't have domain context the data team is separated from the lines of business and as a result they have to constantly context switch and as such there's a lack of alignment so principle number one is focused on putting end-to-end data ownership in the hands of the domain or what i would call the business lines the second principle is data as a product which does cause people's brains to hurt sometimes but it's a key component and if you start sort of thinking about it you'll and talking to people who have done it it actually makes a lot of sense and this leads to principle number three which is a self-serve data infrastructure which we're going to drill into quite a bit today and then the question we always 
get is when we introduce data meshes how to enforce governance in a federated model so let me bring up a more detailed slide jamar with the dependencies and ask you to comment please sure but as you said the the really the root cause we're trying to address is the siloing of the data external to where the action happens where the data gets produced where the data needs to be shared when the data gets used right in the context of the business so it's about the the really the root cause of the centralization gets addressed by distribution of the accountability end to end back to the domains and these domains this distribution of accountability technical accountability to the domains have already happened in the last you know decade or so we saw the transition from you know one general i.t addressing all of the needs of the organization to technology groups within the itu or even outside of the iit aligning themselves to build applications and services that the different business units need so what data mesh does it just extends that model and say okay we're aligning business with the tech and data now right so both application of the data in ml or inside generation in the domains related to the domain's needs as well as sharing the data that the domains are generating with the rest of the organization but the moment you do that then you have to solve other problems that may arise and that you know gives birth to the second principle which is about um data as a product as a way of preventing data siloing happening within the domain so changing the focus of the domains that are now producing data from i'm just going to create that data i collect for myself and that satisfy my needs to in fact the responsibility of domain is to share the data as a product with all of the you know wonderful characteristics that a product has and i think that leads to really interesting architectural and technical implications of what actually constitutes state has a product and we can have a separate conversation but once you do that then that's the point in the conversation that cio says well how do i even manage the cost of operation if i decentralize you know building and sharing data to my technical teams to my application teams do i need to go and hire another hundred data engineers and i think that's the role of a self-serve data platform in a way that it enables and empowers generalist technologies that we already have in the technical domains the the majority population of our developers these days right so the data platform attempts to mobilize the generalist technologies to become data producers to become data consumers and really rethink what tools these people need um and the last last principle so data platform is really to giving autonomy to domain teams and empowering them and reducing the cost of ownership of the data products and finally as you mentioned the question around how do i still assure that these different data products are interoperable are secure you know respecting privacy now in a decentralized fashion right when we are respecting the sovereignty or the domain ownership of um each domain and that leads to uh this idea of both from operational model um you know applying some sort of a federation where the domain owners are accountable for interoperability of their data product they have incentives that are aligned with global harmony of the data mesh as well as from the technology perspective thinking about this data is a product with a new lens with a lens that all of those 
policies that need to be respected by these data products such as privacy such as confidentiality can we encode these policies as computational executable units and encode them in every data product so that um we get automation we get governance through automation so that's uh those that's the relationship the complex relationship between the four principles yeah thank you for that i mean it's just a couple of points there's so many important points in there but the idea of the silos and the data as a product sort of breaking down those cells because if you have a product and you want to sell more of it you make it discoverable and you know as a p l manager you put it out there you want to share it as opposed to hide it and then you know this idea of managing the cost you know number three where people say well centralize and and you can be more efficient but that but that essentially was the the failure in your other point related point is generalist versus specialist that's kind of one of the failures of hadoop was you had these hyper specialist roles emerge and and so you couldn't scale and so let's talk about the goals of data mesh for a moment you've said that the objective is to extend exchange you call it a new unit of value between data producers and data consumers and that unit of value is a data product and you've stated that a goal is to lower the cognitive load on our brains i love this and simplify the way in which data are presented to both producers and consumers and doing so in a self-serve manner that eliminates the tapping on the shoulders or emails or raising tickets so how you know i'm trying to understand how data should be used etc so please explain why this is so important and how you've seen organizations reduce the friction across the data flows and the interconnectedness of things like data products across the company yeah i mean this is important um as you mentioned you know initially when this whole idea of a data-driven innovation came to exist and we needed all sorts of you know technology stacks we we centralized um creation of the data and usage of the data and that's okay when you first get started with where the expertise and knowledge is not yet diffused and it's only you know the privilege of a very few people in the organization but as we move to a data driven um you know innovation cycle in the organization as we learn how data can unlock new new programs new models of experience new products then it's really really important as you mentioned to get the consumers and producers talk to each other directly without a broker in the middle because even though that having that centralized broker could be a cost-effective model but if you if we include uh the cost of missed opportunity for something that we could have innovated well we missed that opportunity because of months of looking for the right data then that cost parented the cost benefit parameters and formula changes so um so to to have that innovation um really embedded data-driven innovation embedded into every domain every team we need to enable a model where the producer can directly peer-to-peer discover the data uh use it understand it and use it so the litmus test for that would be going from you know a hypothesis that you know as a data scientist i think there is a pattern and there is an insight in um you know in in the customer behavior that if i have access to all of the different informations about the customer all of the different touch points i might be able to discover that pattern 
and personalize the experience of my customer the liquid stuff is going from that hypothesis to finding all of the different sources be able to understanding and be able to connect them um and then turn them them into you know training of my machine learning and and the rest is i guess known as an intelligent product got it thank you so i i you know a lot of what we do here in breaking it house is we try to curate and then point people to new resources so we will have some additional resources because this this is not superficial uh what you and your colleagues in the community are creating but but so i do want to you know curate some of the other material that you had so if i bring up this next chart the left-hand side is a curated description both sides of your observations of most of the monolithic data platforms they're optimized for control they serve a centralized team that has hyper-specialized roles as we talked about the operational stacks are running running enterprise software they're on kubernetes and the microservices are isolated from let's say the spark clusters you know which are managing the analytical data etc whereas the data mesh proposes much greater autonomy and the management of code and data pipelines and policy as independent entities versus a single unit and you've made this the point that we have to enable generalists to borrow from so many other examples in the in the industry so it's an architecture based on decentralized thinking that can really be applied to any domain really domain agnostic in a way yes and i think if i pick one key point from that diagram is really um or that comparison is the um the the the data platforms or the the platform capabilities need to present a continuous experience from an application developer building an application that generates some data let's say i have an e-commerce application that generates some data to the data product that now presents and shares that data as as temporal immutable facts that can be used for analytics to the data scientist that uses that data to personalize the experience to the deployment of that ml model now back to that e-commerce application so if we really look at this continuous journey um the walls between these separate platforms that we have built needs to come down the platforms underneath that generate you know that support the operational systems versus supported data platforms versus supporting the ml models they need to kind of play really nicely together because as a user i'll probably fall off the cliff every time i go through these stages of this value stream um so then the interoperability of our data solutions and operational solutions need to increase drastically because so far we've got away with running operational systems an application on one end of the organization running and data analytics in another and build a spaghetti pipeline to you know connect them together neither of the ends are happy i hear from data scientists you know data analyst pointing finger at the application developer saying you're not developing your database the right way and application point dipping you're saying my database is for running my application it wasn't designed for sharing analytical data so so we've got to really what data mesh as a mesh tries to do is bring these two world together closer because and then the platform itself has to come closer and turn into a continuous set of you know services and capabilities as opposed to this disjointed big you know isolated stacks very powerful 
observations there so we want to dig a little bit deeper into the platform uh jamar can have you explain your thinking here because it's everybody always goes to the platform what do i do with the infrastructure what do i do so you've stressed the importance of interfaces the entries to and the exits from the platform and you've said you use a particular parlance to describe it and and this chart kind of shows what you call the planes not layers the planes of the platform it's complicated with a lot of connection points so please explain these planes and how they fit together sure i mean there was a really good point that you started with that um when we think about capabilities or that enables build of application builds of our data products build their analytical solutions usually we jump too quickly to the deep end of the actual implementation of these technologies right do i need to go buy a data catalog or do i need you know some sort of a warehouse storage and what i'm trying to kind of elevate us up and out is to to to force us to think about interfaces and apis the experiences that the platform needs to provide to run this secure safe trustworthy you know performance mesh of data products and if you focus on then the interfaces the implementation underneath can swap out right you can you can swap one for the other over time so that's the purpose of like having those lollipops and focusing and emphasizing okay what is the interface that provides a certain capability like the storage like the data product life cycle management and so on the purpose of the planes the mesh experience playing data product expense utility plan is really giving us a language to classify different set of interfaces and capabilities that play nicely together to provide that cohesive journey of a data product developer data consumer so then the three planes are really around okay at the bottom layer we have a lot of utilities we have that mad mac turks you know kind of mad data tooling chart so we have a lot of utilities right now they they manage workflow management you know they they do um data processing you've got your spark link you've got your storage you've got your lake storage you've got your um time series of storage you've got a lot of tooling at that level but the layer that we kind of need to imagine and build today we don't buy yet as as long as i know is this linger that allows us to uh exchange that um unit of value right to build and manage these data products so so the language and the apis and interface of this product data product experience plan is not oh i need this storage or i need that you know workflow processing is that i have a data product it needs to deliver certain types of data so i need to be able to model my data it needs to as part of this data product i need to write some processing code that keeps this data constantly alive because it's receiving you know upstream let's say user interactions with a website and generating the profile of my user so i need to be able to to write that i need to serve the data i need to keep the data alive and i need to provide a set of slos and guarantees for my data so that good documentation so that some you know someone who comes to data product knows but what's the cadence of refresh what's the retention of the data and a lot of other slos that i need to provide and finally i need to be able to enforce and guarantee certain policies in terms of access control privacy encryption and so on so as a data product developer i just work with 
this unit a complete autonomous self-contained unit um and the platform should give me ways of provisioning this unit and testing this unit and so on that's why kind of i emphasize on the experience and of course we're not dealing with one or two data product we're dealing with a mesh of data products so at the kind of mesh level experience we need a set of capabilities and interfaces to be able to search the mesh for the right data to be able to explore the knowledge graph that emerges from this interconnection of data products need to be able to observe the mesh for any anomalies did we create one of these giant master data products that all the data goes into and all the data comes out of how we found ourselves the bottlenecks to be able to kind of do those level machine level capabilities we need to have a certain level of apis and interfaces and once we decide and decide what constitutes that to satisfy this mesh experience then we can step back and say okay now what sort of a tool do i need to build or buy to satisfy them and that's that is not what the data community or data part of our organizations used to i think traditionally we're very comfortable with buying a tool and then changing the way we work to serve to serve the tool and this is slightly inverse to that model that we might be comfortable with right and pragmatists will will to tell you people who've implemented data match they'll tell you they spent a lot of time on figuring out data as a product and the definitions there the organizational the getting getting domain experts to actually own the data and and that's and and they will tell you look the technology will come and go and so to your point if you have those lollipops and those interfaces you'll be able to evolve because we know one thing's for sure in this business technology is going to change um so you you had some practical advice um and i wanted to discuss that for those that are thinking about data mesh i scraped this slide from your presentation that you made and and by the way we'll put links in there your colleague emily who i believe is a data scientist had some really great points there as well that that practitioners should dig into but you made a couple of points that i'd like you to summarize and to me that you know the big takeaway was it's not a one and done this is not a 60-day project it's a it's a journey and i know that's kind of cliche but it's so very true here yes um this was a few starting points for um people who are embarking on building or buying the platform that enables the people enables the mesh creation so it was it was a bit of a focus on kind of the platform angle and i think the first one is what we just discussed you know instead of thinking about mechanisms that you're building think about the experiences that you're enabling uh identify who are the people like what are the what is the persona of data scientists i mean data scientist has a wide range of personas or did a product developer the same what is the persona i need to develop today or enable empower today what skill sets do they have and and so think about experience as mechanisms i think we are at this really magical point i mean how many times in our lifetime we come across a complete blanks you know kind of white space to a degree to innovate so so let's take that opportunity and use a bit of a creativity while being pragmatic of course we need solutions today or yesterday but but still think about the experiences not not mechanisms that you need to buy so that 
was kind of the first step and and the nice thing about that is that there is an evolutionary there is an iterative path to maturity of your data mesh i mean if you start with thinking about okay which are the initial use cases i need to enable what are the data products that those use cases depend on that we need to unlock and what is the persona of my or general skill set of my data product developer what are the interfaces i need to enable you can start with the simplest possible platform for your first two use cases and then think about okay the next set of data you know data developers they have a different set of needs maybe today i just enable the sql-like querying of the data tomorrow i enable the data scientists file based access of the data the day after i enable the streaming aspect so so have this evolutionary kind of path ahead of you and don't think that you have to start with building out everything i mean one of the things we've done is taking this harvesting approach that we work collaboratively with those technical cross-functional domains that are building the data products and see how they are using those utilities and harvesting what they are building as the solutions for themselves back into the back into the platform but at the end of the day we have to think about mobilization of the large you know largest population of technologies we have we'd have to think about diffusing the technology and making it available and accessible by the generous technologies that you know and we've come a long way like we've we've gone through these sort of paradigm shifts in terms of mobile development in terms of functional programming in terms of cloud operation it's not that we are we're struggling with learning something new but we have to learn something that works nicely with the rest of the tooling that we have in our you know toolbox right now so so again put that generalist as the uh as one of your center personas not the only person of course we will have specialists of course we will always have data scientists specialists but any problem that can be solved as a general kind of engineering problem and i think there's a lot of aspects of data michigan that can be just a simple engineering problem um let's just approach it that way and then create the tooling um to empower those journalists great thank you so listen i've i've been around a long time and so as an analyst i've seen many waves and we we often say language matters um and so i mean i've seen it with the mainframe language it was different than the pc language it's different than internet different than cloud different than big data et cetera et cetera and so we have to evolve our language and so i was going to throw a couple things out here i often say data is not the new oil because because data doesn't live by the laws of scarcity we're not running out of data but i get the analogy it's powerful it powered the industrial economy but it's it's it's bigger than that what do you what do you feel what do you think when you hear the data is the new oil yeah i don't respond to those data as the gold or oil or whatever scarce resource because as you said it evokes a very different emotion it doesn't evoke the emotion of i want to use this i want to utilize it feels like i need to kind of hide it and collect it and keep it to myself and not share it with anyone it doesn't evoke that emotion of sharing i really do think that data and i with it with a little asterisk and i think the definition of data changes and that's 
why i keep using the language of data product or data quantum data becomes the um the most important essential element of existence of uh computation what do i mean by that i mean that you know a lot of applications that we have written so far are based on logic imperative logic if this happens do that and else do the other and we're moving to a world where those applications generating data that we then look at and and the data that's generated becomes the source the patterns that we can exploit to build our applications as in you know um curate the weekly playlist for dave every monday based on what he has listened to and the you know other people has listened to based on his you know profile so so we're moving to the world that is not so much about applications using the data necessarily to run their businesses that data is really truly is the foundational building block for the applications of the future and then i think in that we need to rethink the definition of the data and maybe that's for a different conversation but that's that's i really think we have to converge the the processing that the data together the substance substance and the processing together to have a unit that is uh composable reusable trustworthy and that's that's the idea behind the kind of data product as an atomic unit of um what we build from future solutions got it now something else that that i heard you say or read that really struck me because it's another sort of often stated phrase which is data is you know our most valuable asset and and you push back a little bit on that um when you hear people call data and asset people people said often have said they think data should be or will eventually be listed as an asset on the balance sheet and i i in hearing what you said i thought about that i said well you know maybe data as a product that's an income statement thing that's generating revenue or it's cutting costs it's not necessarily because i don't share my my assets with people i don't make them discoverable add some color to this discussion i think so i think it's it's actually interesting you mentioned that because i read the new policy in china that cfos actually have a line item around the data that they capture we don't have to go to the political conversation around authoritarian of um collecting data and the power that that creates and the society that leads to but that aside that big conversation little conversation aside i think you're right i mean the data as an asset generates a different behavior it's um it creates different performance metrics that we would measure i mean before conversation around data mesh came to you know kind of exist we were measuring the success of our data teams by the terabytes of data they were collecting by the thousands of tables that they had you know stamped as golden data none of that leads to necessarily there's no direct line i can see between that and actually the value that data generated but if we invert that so that's why i think it's rather harmful because it leads to the wrong measures metrics to measure for success so if you invert that to a bit of a product thinking or something that you share to delight the experience of users your measures are very different your measures are the the happiness of the user they decrease lead time for them to actually use and get value out of it they're um you know the growth of the population of the users so it evokes a very different uh kind of behavior and success metrics i do say if if i may that i probably 
come back and regret the choice of word around product one day because of the monetization aspect of it but maybe there is a better word to use but but that's the best i think we can use at this point in time why do you say that jamar because it's too directly related to monetization that has a negative connotation or it might might not apply in things like healthcare or you know i think because if we want to take your shortcuts and i remember this conversation years back that people think that the reason to you know kind of collect data or have data so that we can sell it you know it's just the monetization of the data and we have this idea of the data market places and so on and i think that is actually the least valuable um you know outcome that we can get from thinking about data as a product that direct cell an exchange of data as a monetary you know exchange of value so so i think that might redirect our attention to something that really matters which is um enabling using data for generating ultimately value for people for the customers for the organizations for the partners as opposed to thinking about it as a unit of exchange for for money i love data as a product i think you were your instinct was was right on and i think i'm glad you brought that up because because i think people misunderstood you know in the last decade data as selling data directly but you really what you're talking about is using data as a you know ingredient to actually build a product that has value and value either generate revenue cut costs or help with a mission like it could be saving lives but in some way for a commercial company it's about the bottom line and that's just the way it is so i i love data as a product i think it's going to stick so one of the other things that struck me in one of your webinars was one of the q a one of the questions was can i finally get rid of my data warehouse so i want to talk about the data warehouse the data lake jpmc used that term the data lake which some people don't like i know john furrier my business partner doesn't like that term but the data hub and one of the things i've learned from sort of observing your work is that whether it's a data lake a data warehouse data hub data whatever it's it should be a discoverable node on the mesh it really doesn't matter the the technology what are your your thoughts on that yeah i think the the really shift is from a centralized data warehouse to data warehouse where it fits so i think if you just cross that centralized piece uh we are all in agreement that data warehousing provides you know interesting and capable interesting capabilities that are still required perhaps as a edge node of the mesh that is optimizing for certain queries let's say financial reporting and we still want to direct a fair bit of data into a node that is just for those financial reportings and it requires the precision and the um you know the speed of um operation that the warehouse technology provides so i think um definitely that technology has a place where it falls apart is when you want to have a warehouse to rule you know all of your data and model canonically model your data because um it you have to put so much energy into you know kind of try to harness this model and create this very complex the complex and fragile snowflake schemas and so on that that's all you do you spend energy against the entropy of your organization to try to get your arms around this model and the model is constantly out of step with what's happening in reality 
because reality the model the reality of the business is moving faster than our ability to model everything into into uh into one you know canonical representation i think that's the one we need to you know challenge not necessarily application of data warehousing on a node i want to close by coming back to the issues of standards um you've specifically envisioned data mesh to be technology agnostic as i said before and of course everyone myself included we're going to run a vendor's technology platform through a data mesh filter the reality is per the matt turc chart we showed earlier there are lots of technologies that that can be nodes within the data mesh or facilitate data sharing or governance etc but there's clearly a lack of standardization i'm sometimes skeptical that the vendor community will drive this but maybe like you know kubernetes you know google or some other internet giant is going to contribute something to open source that addresses this problem but talk a little bit more about your thoughts on standardization what kinds of standards are needed and where do you think they'll come from sure i mean the you write that the vendors are not today incentivized to create those open standards because majority of the vet not all of them but some vendors operational model is about bring your data to my platform and then bring your computation to me uh and all will be great and and that will be great for a portion of the clients and portion of environments where that complexity we're talking about doesn't exist so so we need yes other players perhaps maybe um some of the cloud providers or people that are more incentivized to open um open their platform in a way for data sharing so as a starting point i think standardization around data sharing so if you look at the spectrum right now we have um a de facto sound it's not even a standard for something like sql i mean everybody's bastardized to call and extended it with so many things that i don't even know what this standard sql is anymore but we have that for some form of a querying but beyond that i know for example folks at databricks to start to create some standards around delta sharing and sharing the data in different models so i think data sharing as a concept the same way that apis were about capability sharing so we need to have the data apis or analytical data apis and data sharing extended to go beyond simply sql or languages like that i think we need standards around computational prior policies so this is again something that is formulating in the operational world we have a few standards around how do you articulate access control how do you identify the agents who are trying to access with different authentication mechanism we need to bring some of those our ad our own you know our data specific um articulation of policies uh some something as simple as uh identity management across different technologies it's non-existent so if you want to secure your data across three different technologies there is no common way of saying who's the agent that is acting uh to act to to access the data can i authenticate and authorize them so so those are some of the very basic building blocks and then the gravy on top would be new standards around enriched kind of semantic modeling of the data so we have a common language to describe the semantic of the data in different nodes and then relationship between them we have prior work with rdf and folks that were focused on i guess linking data across the web with the um kind of the 
data web i guess work that we had in the past we need to revisit those and see their practicality in the enterprise con context so so data modeling a rich language for data semantic modeling and data connectivity most importantly i think those are some of the items on my wish list that's good well we'll do our part to try to keep the standards you know push that push that uh uh movement jamaica we're going to leave it there i'm so grateful to have you uh come on to the cube really appreciate your time it's just always a pleasure you're such a clear thinker so thanks again thank you dave that's it's wonderful to be here now we're going to post a number of links to some of the great work that jamark and her team and her books and so you check that out because we remember we publish each week on siliconangle.com and wikibon.com and these episodes are all available as podcasts wherever you listen listen to just search breaking analysis podcast don't forget to check out etr.plus for all the survey data do keep in touch i'm at d vallante follow jamac d z h a m a k d or you can email me at david.velante at siliconangle.com comment on the linkedin post this is dave vellante for the cube insights powered by etrbwell and we'll see you next time you
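To make the "data product as an autonomous, self-contained unit" idea in the discussion above a bit more concrete, here is a minimal, hypothetical sketch of a data product that carries its output ports, SLOs, and computational policies with it. The field names, SLO values, and policy rule are illustrative assumptions, not the API of any platform mentioned in the conversation.

```python
# Hypothetical sketch of a data product as a self-contained unit: the data
# travels with its output ports, SLOs, and executable policies. Field names
# and policy rules are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class DataProduct:
    name: str
    domain: str
    output_ports: Dict[str, str]                 # e.g. SQL table, event topic
    slos: Dict[str, str]                         # e.g. freshness, retention
    policies: List[Callable[[dict], bool]] = field(default_factory=list)

    def publish(self, record: dict) -> dict:
        # Governance runs as code on every record, not as a periodic audit.
        for check in self.policies:
            if not check(record):
                raise ValueError(
                    f"{self.name}: record rejected by policy '{check.__name__}'"
                )
        return record


def no_raw_pii(record: dict) -> bool:
    # Example computational policy: reject records carrying raw PII fields.
    return "ssn" not in record and "email" not in record


orders = DataProduct(
    name="orders",
    domain="commerce",
    output_ports={"sql": "warehouse.commerce.orders", "events": "topics/orders"},
    slos={"freshness": "15m", "retention": "90d"},
    policies=[no_raw_pii],
)

print(orders.publish({"order_id": 42, "amount_usd": 99.0}))  # passes the policy
```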

Published Date : Oct 25 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
60-day | QUANTITY | 0.99+
one | QUANTITY | 0.99+
40 percent | QUANTITY | 0.99+
matt turk | PERSON | 0.99+
two books | QUANTITY | 0.99+
china | LOCATION | 0.99+
thousands of tables | QUANTITY | 0.99+
dave vellante | PERSON | 0.99+
jamaac | PERSON | 0.99+
google | ORGANIZATION | 0.99+
siliconangle.com | OTHER | 0.99+
tomorrow | DATE | 0.99+
yesterday | DATE | 0.99+
october | DATE | 0.99+
boston | LOCATION | 0.99+
first step | QUANTITY | 0.98+
jamar | PERSON | 0.98+
today | DATE | 0.98+
jamaica | PERSON | 0.98+
both sides | QUANTITY | 0.98+
shamak | PERSON | 0.98+
dave | PERSON | 0.98+
jamark | PERSON | 0.98+
first one | QUANTITY | 0.98+
o'reilly | ORGANIZATION | 0.98+
both | QUANTITY | 0.97+
each week | QUANTITY | 0.97+
john furrier | PERSON | 0.97+
second principle | QUANTITY | 0.97+
jamaak dagani shamak | PERSON | 0.96+
less than a year ago | DATE | 0.96+
earlier this year | DATE | 0.96+
three different technologies | QUANTITY | 0.96+
jamaa | PERSON | 0.95+
each domain | QUANTITY | 0.95+
terabytes of data | QUANTITY | 0.94+
three planes | QUANTITY | 0.94+
july | DATE | 0.94+
last decade | DATE | 0.93+
about 1500 respondents | QUANTITY | 0.93+
decades | QUANTITY | 0.93+
first | QUANTITY | 0.93+
first two | QUANTITY | 0.93+
dot works | ORGANIZATION | 0.93+
one key point | QUANTITY | 0.93+
first two use cases | QUANTITY | 0.92+
last friday | DATE | 0.92+
this week | DATE | 0.92+
two | QUANTITY | 0.92+
three other | QUANTITY | 0.92+
ndor | ORGANIZATION | 0.92+
first thing | QUANTITY | 0.9+
two data | QUANTITY | 0.9+
lake | ORGANIZATION | 0.89+
four areas | QUANTITY | 0.88+
single tool | QUANTITY | 0.88+
north america | LOCATION | 0.88+
single unit | QUANTITY | 0.87+
jamac | PERSON | 0.86+
one of | QUANTITY | 0.85+
things | QUANTITY | 0.85+
david.velante | OTHER | 0.83+
past eight quarters | DATE | 0.83+
four principles | QUANTITY | 0.82+
dave | ORGANIZATION | 0.82+
a lot of applications | QUANTITY | 0.81+
four main principles | QUANTITY | 0.8+
sql | TITLE | 0.8+
palo alto | ORGANIZATION | 0.8+
emily | PERSON | 0.8+
d vallante | PERSON | 0.8+

Mark Hinkle | KubeCon + CloudNativeCon NA 2021


 

(upbeat music) >> Greetings from Los Angeles, Lisa Martin here with Dave Nicholson. We are on day three of theCUBE's wall-to-wall coverage of KubeCon CloudNativeCon North America 21. We're pleased to welcome Mark Hinkle to the program, the co-founder and CEO of TriggerMesh. Mark, welcome. >> Thank you, it's nice to be here. >> Lisa: Love the name. Very interesting, TriggerMesh. Talk to us about what TriggerMesh does, when you were founded, and what some of the gaps were that you saw in the market. >> Yeah, so TriggerMesh, actually the genesis of the name is in cloud event-driven architecture. You trigger workloads. So that's the trigger in TriggerMesh, and then mesh, we mesh services together, so that's why we're called TriggerMesh. So we're a cloud native open source integration platform. And the idea is that the number of cloud services is proliferating. You still have stuff in your data center that you can't decommission and just wholesale lift and shift to the cloud. So we wanted to provide a platform to create workflows from the data center to the cloud, from cloud to cloud, and use all the cloud native design principles, but not leave your past behind. So that's what we do. We were, we are cloud operators and developers, and we wanted the experience to be very similar to the way that DevOps folks are doing infrastructure as code and deploying that; we want to make it easy to do integration as code. So we follow the same design patterns, use the same domain languages, some of those tools like HashiCorp Terraform, and that, that's what we do and how we go about doing it. >> Lisa: And when were you guys founded? >> September, 2018. >> Oh, so you're young, you're three years young. >> Three years, it feels like 21. >> I bet. >> In startup years a lot has happened, but yeah, my co-founder and I were early cloud folks. We were at cloud.com, worked through the OpenStack years and CloudStack, and we just saw the pattern of abstraction coming about. So first you abstract the hardware, then you abstract the operating system. And now, with the Kubernetes container, you know, evolution, you're abstracting it up to the application layer, and we wanted to be able to provide tooling that lets you take full advantage of that. >> Dave: So being founded in 2018, what's your perception of that? The shift that happened during the pandemic in terms of the drive towards cloud adoption and the demands for services like you provide? >> Mark: Yeah, I think it's a mixed blessing. So people became more remote. They needed to enable digital transformation. The biggest thing, I think, for us is, you know, you don't go to the bank anymore. And the banking industry is doing, you know, exponentially more remote, online transactions than in person. And it's very important. So we decided that financial services is where we were going to start first, because they have a lot of legacy architecture. They have a lot of need to move to the cloud to have better digital experiences. And we wanted to enable them to, you know, keep their mainframes online while they were still doing cutting edge, you know, mobile applications, that kind of thing. >> Lisa: And of course the legacy institutions like the B of A's, the Wells Fargos, they're competing with the fintechs who are much more nimble, much more agile and able to sort of disrupt the financial services industry. Was that part of also your decision to start in financial services?
>> It was a little bit of luck, because we started with our network, and it turned out, you know, we started talking to our friends early on, 'cause we're a startup, and said, this is what we're going to do. And where it really resonated was PNC Bank, which was one of our first customers. You know, another financial regulatory company was another one, a couple of banks in Europe. And, you know, as we started talking about what we were doing, we just gravitated there because they had the, the biggest need. Even though everybody has the need, their businesses are, you know, critically tied to digital transformation. >> So starting with financial services. >> It's, it's counterintuitive, isn't it? >> It was counterintuitive, but it lends credibility to any other industry vertical that you're going to approach. >> Yeah, yeah it does. It's a, it's a great, they're going to be our hardest customers, and they have more at stake than a lot of, like, transactions are millions and millions of dollars per hour for these folks. So they don't want to play around; they, they have no tolerance for failure. So it's a good start, but it's sort of like taking up jogging and running a marathon in your first week. It's very, very grueling in that sense, but it really has made us a lot better and gave us a lot of insight into the kinds of things we need to do, from not just functionality, but security and that kind of thing. >> Where are you finding these customers with respect to adoption of Kubernetes? Are they leading? Are they knowing we've got to get there eventually from an infrastructure perspective? >> So the interesting thing is Kubernetes is a platform for us to deliver on, so we, we don't require you to be a Kubernetes expert, we offer it as a SaaS, but what happens is that the Kubernetes folks are the ones that we end up really engaging with earlier on. And I think that we find that they're in this phase of containerizing their apps, that's the first step. And then they're putting them on Kubernetes, and then their next step is a security and integration path. So once, I think they call it, and this is my buzzword of the show, day two operations, right? So they, they get to day two, and then they have a security and an integration concern before they go live. So they want to be able to make sure that they don't increase their attack surface. And then they also want to make sure that this newly deployed containerized infrastructure is as well integrated as the previous, you know, virtualized or even, you know, on-the-server infrastructure that they had before.
You just said, you know, let's just, let's just go after that. >> You know, yeah. I mean, we had this dart forward and we put up buzzwords, but no, it was, it was actually just, and you know, we're still finding our way as far as early on where we're open source folks. And we did not open source from day one, which is very weird when everybody's new, your identity is, you know, I worked, I was the VP of marketing for Linux foundation and no JS and all these open source projects. And my co-founder and I are Apache committers. And our project wasn't open yet because we had to get to the point where it could be open and people could be productive in the use and contribution. And we had to staff up engineers. And now I think this week we open-sourced our entire platform. And I think that's going to open up, you know, that's where we started because it was not necessarily the lowest hanging fruit, but the profitable, less profitable, lowest hanging fruit was financial services. Now we are letting our code out into the wild. And I think it'll be interesting to see what comes back. >> So you just announced that this week TriggerMesh integration platform as an open source project here at KubeCon, what's been some of the feedback? >> It's all been positive. I haven't heard anything negative. We did it, so we're very, very, there's a very, the culture around open source is very tough. It's very critical if you don't do it right. So I think we did a good job, we used enough, we used a OSI approved. They've been sourced, licensed the Apache software, a V2 license. We hired someone who was well-respected in the DevREL world from a chef who understands the DevOps sort of culture methodologies. We staffed up our engineers who are going to be helping the free and open source users. So they're successful and we're betting that that will yield business results down the road. >> Lisa: And what are the two I see on your website, two primary use cases that you guys support. Can you dig into details on that? >> So the first one is sort of a workflow automation and a really simple example of that is you have a, something that happens in one cloud. So for example, you take a picture on your phone and you upload it and it goes to Amazon and there is a service that wants to identify what's in that picture. And once you put it on the line and the internship parlance, you could kick off a workflow from TensorFlow, which is artificial intelligence to identify the picture. And there isn't a good way for clouds to communicate from one to the other, without writing custom blue, which is really what, what we're helping to get rid of is there's a lot of blue written to put together cloud native applications. So that's a workflow, you know, triggering a server less function is the workflow. The other thing is actually breaking up data gravity. So I have a warehouse of data, in my data center, and I want to start replicating some portion of that. As it changes to a database as a service, we can based on an event flow, which is passive. We're not, we're not making, having a conversation like you would with an API where there's an event stream. That's like drinking from the fire hose and TriggerMesh is the nozzle. And we can direct that data to a DBaaS. We can direct that data to snowflake. 
We can direct that data to a cloud-based data lake on Microsoft Azure, or we can split it up, so some events could go to Splunk and all of the events can go to your data lake or some of those, those things can be used to trigger workloads on other systems. And that event driven architecture is really the design pattern of the individual clouds. We're just making it multi-cloud and on-prem. >> Lisa: Do you have a favorite customer example that you think really articulates that the value of that use case? >> Mark: Yeah I think a PNC is probably our, well for the, for the data flow one, I would say we have a regular to Oracle and one of their customers it was their biggest SMB customer of last year. The Oracle cloud is very, very important, but it's not as tool. It doesn't have the same level of tooling as a lot of the other ones. And to, to close that deal, their regulatory customer wanted to use Datadog. So they have hundreds and hundreds of metrics. And what TriggerMesh did was ingest the hundreds and hundreds of metrics and filter them and connect them to Datadog so that, they could, use Datadog to measure, to monitor workloads on Oracle cloud. So that, would be an example of the data flow on the workflow. PNC bank is, is probably our best example and PNC bank. They want to do. I talked about infrastructure code integration is code. They want to do policy as code. So they're very highly regulatory regulated. And what they used to do is they had policies that they applied against all their systems once a month, to determine how much they were in compliance. Well, theoretically if you do that once a month, it could be 30 days before you knew where you were out of compliance. What we did was, we provided them a way to take all of the changes within their systems and for them to a server less cluster. And they codified all of these policies into server less functions and TriggerMesh is triggering their policies as code. So upon change, they're getting almost real-time updates on whether or not they're in compliance or not. And that's a huge thing. And they're going to, they have, within their first division, we worked with, you know, tens of policies throughout PNC. They have thousands of policies. And so that's really going to revolutionize what they're able to do as far as compliance. And that's a huge use case across the whole banking system. >> That's also a huge business outcome. >> Yes. >> So Mark, where can folks go to learn more about TriggerMesh, maybe even read about more specifically about the announcement that you made this week. >> TriggerMesh.com is the best way to get an overview. The open source project is get hub.com/triggermesh/trigger mesh. >> Awesome Mark, thank you for joining Dave and me talking to us about TriggerMesh, what you guys are doing. The use cases that you're enabling customers. We appreciate your time and we wish you best of luck as you continue to forge into financial services and other industries. >> Thanks, it was great to be here. >> All right. For Dave Nicholson, I'm Lisa Martin coming to you live from Los Angeles at KubeCon and CloudNativeCon North America 21, stick around Dave and I, will be right back with our next guest.
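The PNC "policy as code" example in the conversation above can be grounded with a small, generic sketch of the pattern: a change event triggers a set of policy functions and yields a near-real-time compliance verdict instead of a monthly sweep. This is not TriggerMesh's actual API; the event types, fields, and policy rules are assumptions made purely for illustration.

```python
# Generic sketch of policy-as-code on change events: each incoming event is
# checked against the policies registered for its type. Event types, fields,
# and rules here are illustrative assumptions, not TriggerMesh's API.
from typing import Callable, Dict, List

Policy = Callable[[dict], bool]

POLICIES: Dict[str, List[Policy]] = {
    "storage.bucket.changed": [
        lambda e: e.get("encryption") == "aes256",      # must stay encrypted
        lambda e: e.get("public_access") is False,      # never publicly readable
    ],
    "iam.role.changed": [
        lambda e: "AdministratorAccess" not in e.get("attached_policies", []),
    ],
}


def on_change_event(event: dict) -> dict:
    """Evaluate every registered policy for this event type."""
    checks = POLICIES.get(event.get("type"), [])
    failed = [i for i, check in enumerate(checks) if not check(event)]
    return {
        "resource": event.get("resource"),
        "compliant": not failed,
        "failed_checks": failed,
    }


print(on_change_event({
    "type": "storage.bucket.changed",
    "resource": "reports-bucket",
    "encryption": "aes256",
    "public_access": False,
}))
```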

Published Date : Oct 15 2021


Harry Dewhirst, Linksys | Fortinet Security Summit 2021


 

>> From around the globe, it's theCUBE, covering Fortinet Security Summit, brought to you by Fortinet. >> Welcome back to Napa. Lisa Martin here at the Fortinet Championship Security Summit. I'm pleased to welcome the CEO of Linksys, who joins me next: Harry Dewhirst. Harry, welcome to the program. Great to have you here. We are at an in-person event — one, which is fantastic; two, we're outdoors; three, we're in Napa. >> What's not to love? >> There's nothing, nothing not to love. So you had a session this morning. Talk to me about some of the things that you shared with attendees. >> So the session was talking about hybrid work and really how to make that successful. And, you know, we as a business have really focused on making it not just work for companies, but for companies to thrive, to really embrace hybrid work and extract the most benefit from it. So we spoke about the challenges that that has, and some of the solutions to solving those challenges. >> Tell me about some of the solutions. I'm very familiar, as someone who has been working from home for 18 months, with some of the challenges. I understand it too from an enterprise security perspective, but what are some of the solutions that Linksys sees? >> So the solutions fall into kind of three main categories. The first is of course having the best and latest wireless technologies, so that's WiFi 6. It of course needs to be coupled with having a good pipe into your home, or leveraging 5G and other wireless technologies, to have great connectivity, then having mesh networking to enable wall-to-wall coverage and seamless roaming between all the devices, to mean that your network infrastructure within the home is very robust. The second pillar of the solution is that now you can bring enterprise-grade security into the home. Typically it would sit in server cupboards in offices, and now, with us and Fortinet, we've created a product which brings that enterprise-grade technology for the first time into the home. So IT managers no longer have to compromise when it comes to security, and they can apply the same policies that they would be applying in an office of 10,000 people to 10,000 offices that are in individuals' homes. And that's kind of a world first, I would say, but it is going to be critical. And again, it's about moving from "it's good enough" to "let's make it amazing," and let's not compromise on something as critical as security and safety. >> Absolutely. We've spoken a lot with Fortinet today and over the last year and a half about the massive changes to the threat landscape, the expansion of it, especially with this pivot, when suddenly there were all of these devices — personal devices on home networks, corporate devices on home networks. It's really changed not just the threat landscape, but also what enterprises need to do. You mentioned this new announcement came out yesterday, the Linksys HomeWRK solution powered by Fortinet. Talk to us about that, the genesis of it, and where enterprises can actually get access to this. >> Sure. So yeah, this is a product that really has been a meeting of minds. You know, Linksys is a leader and has been a leader since the very beginning of wireless, and we are, you know, a leader today. Fortinet, of course, is a leader in enterprise security.
So the two combined provide the best-in-class home internet experience, coupled with security, which can be managed by the business. So as an end user, as an employee, when I plug in this equipment, it automatically phones home to Linksys, and then in turn to Fortinet. We know that it's Harry at Linksys whose unit has been plugged in. It will spin up a network for me personally and my family to use in the home. So the benefit to the consumer is that there's a fantastic WiFi 6 mesh solution throughout their home, which is most likely a significant upgrade on their Verizon equipment or whatever it might be. And it spins up a corporate network, and that corporate network, for all intents and purposes, behaves exactly as if you were sitting at your desk in the corporate office. So it becomes an extension of the corporate network, and as I say, it sits behind the FortiGate. >> Talk to me about the genesis of the solution. Was it the pandemic? Because Linksys has seen the challenges from the consumer-centric point of view. Talk to me about really kind of the catalyst for these two powerhouses coming together. >> So it was actually something that we were working on pre-pandemic, and Fortinet were also looking at how to support remote work, because remote work is not totally new — this pandemic has rapidly accelerated it, but there was already a market, and a growing one; this has just accelerated it. So both businesses, independently of one another, were kind of toying with it. So when we then came together, it was a no-brainer. There was a kind of light bulb moment, and we realized that the combined solution, with the two businesses bringing together the expertise from both, was really how we would succeed. >> Do you see — and I know it was just announced yesterday — any industries in particular that you think are really low-hanging fruit for this type of technology? >> I mean, I think finance, of course; you know, there's high-stakes poker in that industry. Same goes for healthcare, and even education — ones where security is paramount. And of course security is paramount everywhere, but those ones in particular, given the nature of those industries. So we really expect to see banking, finance, healthcare and pharma as key verticals where we would expect to be successful. >> Okay, excellent. Well, one of the challenges is the ransomware increase; the Fortinet threat landscape report showed it's up nearly 11x in the last 12 months. Of course, we had that rapid pivot to work from home 18 months ago, and ransomware and phishing techniques and social engineering are getting so much more sophisticated and personalized. Now you've got someone working from home who probably has a million distractions — kids, spouses, et cetera — so it's easy to click on a link that for the most part looks very legitimate. So having a solution like this in place is really critical. >> Absolutely. And I think, you know, until those vulnerabilities are sealed, the attacks will continue. And this solution is part of the solve for that.
Because as soon as these holes in the bucket are taped shut, you know, the appetite to invest time in attacks will fade. >> Hopefully that's the direction that we need to see it going, right? Not up and to the right — down. Talk to me about — so you mentioned the IT perspective. I'm looking for the benefits for an enterprise IT organization: centralized visibility, what they can see in terms of productivity. I imagine it's much better for the end user, but give me that kind of IT and business perspective. How does this help them come together? >> So for all intents and purposes, the IT manager will see, within their Fortinet interface, these devices — these Linksys devices in people's homes — just in the same way that they would see FortiGates in their office in New York or their office in Pittsburgh. So, you know, it really is this: there were 15,000 people in five offices; there's now 15,000 people in 15,000 offices, but they can manage and push those security policies seamlessly down to all 15,000. They can categorize them. For all intents and purposes, those employees are sitting in one of their facilities. And that's really the bar that I believe companies should be holding themselves to, because it provides security for the company, it provides security for the employee, and of course, by them being able to connect efficiently and securely and with great speed and no interruption, that's good for productivity, which is good for the company's profitability. >> Absolutely, it's all interconnected. And this is tuned for video conferencing, is that right? >> Yes. So we've actually partnered with both Zoom and Microsoft Teams; we've done an integration with them whereby we're able to identify and optimize that traffic within the network. So that adds a benefit to users of those services, and we'll be rolling out further partnerships with other key utilities to enable that optimization, to help it be streamlined.
>> Maybe the acceleration of it had a bit of a silver lining, from what we've all experienced in the last 18 months. Yes. Talk to me about some of the comments and the feedback that you got from your session this morning. I'm sure people are very excited to hear about what you're doing. >> Yeah. I mean, since the announcement came out yesterday, there's certainly been a lot of interest and appetite, and yeah, we're super excited about the reception it's received. I think a lot of people are like, "Oh, wow, of course — why wouldn't this exist already?" And when you look at it like that, it kind of is obvious, but, you know, no one expected the pandemic, and therefore no one was ready for it, and it's taken us a year or so to get a product that's viable and ready and going to be a really great utility for companies. But there really was nothing else out there. >> It is surprising in a sense, but then you're right, no one was prepared for the pandemic. We didn't see it coming, and we didn't think that this was a situation that we were going to have to prepare for, let alone live with for as long as — TBD — as long as we have. >> Yeah, no, absolutely. I think it caught everyone by surprise. I think maybe if it had happened several years later — the hybrid work movement had started, but it was in its infancy. It got very quickly ramped up to adulthood. >> It definitely did. So, great news. Very exciting what you guys are doing with Fortinet. I'm sure that there's going to be great customer feedback. We'll be excited to watch what happens as it gets deployed and rolled out, and see how this really transforms the enterprise experience and the employee experience. And I imagine this is a great differentiator for Linksys's business. >> I think it's a really exciting next chapter of our history. You know, we've been around for 30-plus years, and I think this is a real step change in where we're focused, and I'm super excited about the future. >> I like that — change and the future. Well, here we are in beautiful Napa. You said you're not a golfer, but your wife is. >> My wife is golfing. I'm going to be keeping very many fingers crossed tomorrow during the program, for the safety of the spectators. >> That's awesome that she's in the program and here you are saddled with all these meetings and all those things. >> Exactly. >> Well, Harry, it's been a pleasure talking to you. Thank you for joining me on the program, explaining the Linksys HomeWRK solution powered by Fortinet and all the great things that are going to come from that. For Harry Dewhirst, I'm Lisa Martin. You're watching theCUBE in Napa at the Fortinet Championship Security Summit.

Published Date : Sep 14 2021


Breaking Analysis: Chaos Creates Cash for Criminals & Cyber Companies


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> The pandemic not only accelerated the shift to digital, but also highlighted a rush of cyber criminal sophistication, collaboration, and chaotic responses by virtually every major company on the planet. The SolarWinds hack exposed supply chain weaknesses and so-called island hopping techniques that are exceedingly difficult to detect. Moreover, the will and aggressiveness of well-organized cyber criminals has elevated to the point where incident responses are now met with counterattacks designed to both punish and extract money from victims via ransomware and other criminal activities. The only upshot is the cyber security market remains one of the most enduring and attractive investment sectors for those that can figure out where the market is headed and which firms are best positioned to capitalize. Hello everyone, and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis we'll provide our quarterly update of the security industry and share new survey data from ETR and theCUBE community that will help you navigate through the maze of corporate cyber warfare. We'll also share our thoughts on the game of 3D chess that Okta CEO Todd McKinnon is playing against the market.

Now, we all know this market is complicated, fragmented and fast moving, and this next chart says it all. It's an interactive graphic from Optiv, a Denver, Colorado-based SI that's focused on cyber security. They've done some really excellent research and put together this awesome taxonomy and mapped vendor names therein, and this helps users navigate the complex security landscape. And there are over a dozen major, high-level sectors within the security taxonomy and nearly 60 sub-sectors, from monitoring, vulnerability assessment, identity, asset management, firewalls, automation, cloud, data center, SIEM, threat detection and intelligence, endpoint, network, and so on and so on. But this is a terrific resource that can help you understand where players fit and help you connect the dots in the space.

Now let's talk about what's going on in the market. The dynamics in this crazy mess of a landscape are really confusing sometimes. Now, since the beginning of cyber time we've talked about the increasing sophistication of the adversary and the back-and-forth escalation between good and evil, and unfortunately this trend is unlikely to stop. Here's some data from Carbon Black's annual Modern Bank Heist report. This is the fourth, and of course now VMware's brand highlights the Carbon Black study since the acquisition, which catalyzed the creation of VMware's cloud security division. Destructive malware attacks, according to the recent study, are up 118 percent from last year. Now, one major takeaway from the report is that hackers aren't just conducting wire fraud — they are; 57 percent of the banks surveyed saw an increase in wire fraud — but the cyber criminals are also targeting non-public information such as future trading strategies. This allows the bad guys to front-run large block trades and profit. It's become a very lucrative practice. Now, the prevalence of so-called island hopping is up 38 percent from already elevated levels. This is where a virus enters a company's supply chain via a partner and then often connects with other stealthy malware downstream. These techniques are more common where the malware will actually self-form with other infected parts of the supply chain and create actions with different signatures designed to identify and exfiltrate valuable information. It's a really complex problem. Of major concern is that 63 percent of banking respondents in the study reported that responses to incidents were then met with retaliation designed to intimidate, or initiate ransomware attacks to extract a final pound of flesh from the victim. Notably, the study found that 75 percent of CISOs reported to the CIO, which many feel is not the right regime. The study called for a rethinking of the right cyber regime, where the CISO has increased responsibility and a direct reporting line to the CEO, or perhaps the COO, with greater exposure to boards of directors. So many thanks to VMware and Tom Kellermann specifically for sharing this information with us this past week — great work by your team.

Now, some of the themes that we've been talking about for several quarters are shown in the lower half of the chart. Cloud, of course, is the big driver, thanks to work from home and the pandemic, and the interesting corollary of course is that we see a rapid rethinking of endpoint and identity access management and the concept of zero trust. In a recent ESG survey, two-thirds of respondents said that their use of cloud computing necessitated a change in how they approach identity access management. Now, as shown in the chart from Optiv, the market remains highly fragmented, and M&A is of course way up. Based on our research, it looks like transaction volume has increased more than 40 percent just in the last five months.

So let's dig into the M&A, the merger and acquisition trends, for just a moment. We took a five-month snapshot and we were able to count about 80 deals that were completed in that time frame. Those transactions represented more than 20 billion dollars in value. Some of the larger ones are highlighted here, the biggest of course being Thoma Bravo taking Proofpoint private for a 12-plus billion dollar price tag. The stock went from the low 130s and is trading in the low 170s, based on a 176 dollar per share offer — so there's your arbitrage, folks, go for it. Perhaps the more interesting acquisition was Auth0 by Okta for 6.5 billion, which we're going to talk about more in a moment. There was more private equity action, as Insight bought Armis, an IoT security play, and Cisco shelled out 730 million dollars for IMImobile, which is more of an adjacency to cyber, but it's going to go under Cisco's security and applications business run by Jeetu Patel. But these are just the tip of the iceberg. Some of the themes that we see connecting the dots of these acquisitions: first, SIs like Accenture, Atos and Wipro are making moves in cyber to go local. They're buying SecOps expertise, as I say, locally, in places like France, Germany, the Netherlands, Canada and Australia — that last mile, that belly-to-belly intimate service. Israeli-based startups chalked up five acquired companies in the space over the last five months. Also, financial services firms are getting into the act, with Goldman and Mastercard making moves to own part of the stack themselves to combat things like fraud and identity theft. And then, finally, numerous moves to expand markets: Okta with Auth0; CrowdStrike buying a log management company; Palo Alto picking up DevOps expertise; Rapid7 shoring up its Kubernetes chops; Tenable expanding beyond insights and going after identity — interesting; Fortinet filling gaps in a multi-cloud offering; SailPoint extending to governance, risk and compliance, GRC; Zscaler picking up an Israeli firm to fill gaps in access control; and then VMware buying Mesh7 to secure modern app development and distribution services. So tons and tons of activity here.

Okay, so let's look at some of the ETR data to put the cyber market in context. ETR uses the concept of market share, one of its key metrics, which is a measure of pervasiveness in the data set. For each sector it calculates the number of respondents for that sector divided by the total, to get a sense for how prominent the sector is within the CIO and IT buyer communities. Okay, this chart shows the full ETR sector taxonomy with security highlighted across three survey periods: April last year, January this year and April this year. Now, you wouldn't expect big moves in market share over time, so it's relatively stable by sector, but the big takeaway comes from observing which sectors are most prominent. So you see that red dotted line imposed at the sixty percent level — there are only six sectors above that line, and cyber security is one of them. Okay, so we know that security is important and a large market, but this puts it in the context of the other sectors. However, we know from previous Breaking Analysis episodes that despite the importance of cyber and the urgency catalyzed by the pandemic, budgets unfortunately are not unlimited and spending is bounded. It's not an open checkbook for CISOs, as shown in this chart. This is a two-dimensional graphic showing market share, or pervasiveness, on the horizontal axis and net score on the vertical axis. Net score is ETR's measurement of spending velocity, and we've superimposed a red line at 40 percent, because anything over 40 percent we consider extremely elevated. We've filtered and limited the number of sectors to simplify the graphic, and you can see, in the sectors that we've highlighted, only the big four are above that forty percent line: AI, containers, RPA and cloud exceed that sort of forty percent magic waterline. Information security, you can see, is highlighted and it's respectable, but it competes for budget with other important sectors. So this of course creates challenges for organizations, because not only are they strapped for talent, as we've reported, they, like everyone else in IT, face ongoing budget pressures. Research firm Cybersecurity Ventures estimates that in 2021 6 trillion dollars worldwide will be lost to cyber crime. Conversely, research firm Canalys pegs security spending somewhere around 60 billion dollars annually; IDC has it higher, around 100 billion. So either way, we're talking about spending between one and one point six percent annually of what the bad guys are taking out. That's peanuts, really, when you consider the consequences.

So let's double-click into the cyber landscape a bit and further look at some of the companies. Here's that same XY graphic with the companies ETR captures from respondents in the cyber security sector — that's what's shown on the chart here. Now, the usefulness of the red lines: 20 percent on the horizontal indicates the largest presence in the survey, and the magic 40 percent line that we talked about earlier shows those firms with the most elevated momentum. Only Microsoft and Palo Alto exceed both high watermarks. Of course Splunk and Cisco are prominent horizontally, and there are numerous companies to the left of the 20 percent line and many above that 40 percent high watermark on the vertical axis. Now, the bottom left quadrant includes many of the legacy names that have been around for a long time, and there are dozens of companies that show spending momentum on their platforms, i.e. above single digits. So that picture is, like the first one we showed you, a very, very crowded space. So let's filter it a bit and only include companies in the ETR survey that had at least a hundred responses — an N of a hundred or greater — so it's a little easier to read, but still it's kind of crowded when you think about it. Okay, same graphic, and we've superimposed the data that determined the plot position over in the bottom right there, so it's net score and shared N, including only companies with more than 100 N. So what does this data tell us about the market? Well, Microsoft is dominant, as always it seems, in all dimensions, but let's focus on that red line for a moment. Some of the names that we've highlighted over the past two years show very well here. First I want to talk about Palo Alto Networks. Pre-COVID, as you might recall, we highlighted the valuation divergence between Palo Alto and Fortinet, and we said Fortinet was executing better on its cloud strategy and Palo Alto was at the time struggling with the transition, especially with its go-to-market and its sales force compensation, and really refreshing its portfolio. But we told you that we were bullish on Palo Alto Networks at the time because of its track record and the fact that CIOs consistently told us that they saw Palo Alto as a thought leader in the space that they wanted to work with. They said that Palo Alto was the gold standard, the best — especially larger company CISOs. So that gave us confidence that Palo Alto, a very well-run company, was going to get its act together and perform better, and Palo Alto has done just that, as we expected. They've done very well, they've been rapidly moving customers to the next generation of platforms, and we're very impressed by the company's execution — and the stock has generally reflected that. Now, some other names that hit our radar in the ETR data a couple of years ago continue to perform well: CrowdStrike, Zscaler, SailPoint and Cloudflare. Cloudflare just reported and beat earnings but was off — the stock fell on headwinds for tech overall, the big rotation — but the company is doing very well; they're growing rapidly and they have momentum, as you can see from the ETR data. And we put that double star around Proofpoint to highlight that it was worthy of fetching 12 and a half billion dollars from a private equity firm — so, nice exit there, supporting the continued consolidation trend that we've predicted in cyber security.

Now let's turn our attention to Okta and Auth0. This is where it gets interesting, and it's a clever play for Okta, we think, and we want to drill into it a bit. Okta is acquiring Auth0 for big money. Why? Well, we think Todd McKinnon, Okta's CEO, wants to run the table on identity and then continue to expand his TAM. He has to do that to justify his lofty valuation. So Okta's ascendancy around identity and single sign-on is notable. The fragmented pictures that we've shown you scream out for simplification and trust, and that's what Okta brings. But it competes with some major players, most notably Microsoft with Active Directory. So look, of course Microsoft is going to dominate in its massive customer base, but the rest of the market is like a jump ball, it's wide open, and we think McKinnon saw the opportunity to go dominate that sector. Now, Okta comes at this from an enterprise perspective, bringing top-down trust to the equation, throwing a big blanket over all the discrete SaaS platforms and unifying employee access. Okta's timing was perfect. It was founded in 2009, just as the massive SaaSification trend was happening around CRM, HR, service management, cloud, etc. But the one thing that Okta didn't have that Auth0 does is serious developer chops. While Okta was crushing it with its enterprise sales strategy, Auth0 was laser-focused on developers and building a bottoms-up approach to identity. By acquiring Auth0, Okta can dominate both sides of the barbell and then capture the fat middle. So yes, it's a pricey acquisition, but in our view it's a great move by McKinnon. Now, I don't know McKinnon personally, but last week I spoke to Arun Shrestha, who's the CEO of security specialist BeyondID — they're a platinum services partner of Okta and they're a zero trust expert. He worked for Okta for a number of years and shared with me a bit about McKinnon's style and think-big approach. Arun said something that caught my attention: he said firewalls used to be the perimeter, now people are. And while that's self-serving to Okta, and probably BeyondID, it's true. People, apps and data are the new perimeter, they're not in one location, and that's the point. Now, unfortunately I had lined up an interview with Diya Jolly, who is the chief product officer at Okta and a CUBE alum, for this past week, knowing that we were running this segment in this episode, but she unfortunately fell ill the day of our interview and had to cancel. But I want to follow up with her and understand how she's thinking about connecting the dots with Auth0, with devs and enterprises, and really test our thesis there. This is a really interesting chess match that's going on.

Let's look a little deeper into that identity space. This chart shows some of the major identity players — it has some of the leaders in the identity market — and there's a breakdown of ETR's net score. Now, net score comprises five elements: the lime green is we're adding the platform new; the forest green is we're spending six percent or more relative to last year; the gray is flat spend, plus or minus five percent; the pinkish is spending less; and the bright red is we're exiting the platform, retiring it. You subtract the red from the green and that gets you the result for net score, which you can see superimposed on the right-hand chart at the bottom, that first column there. The far column is shared N, which indicates the number of responses and is a proxy for presence in the market. Oh, look at the top two players in terms of spending momentum. Now, SailPoint is right there, but Auth0 combined with Okta's distribution channel will extend Okta's lead significantly, in our view. And then there's Microsoft — now, just a caveat, this includes all of Microsoft's security offerings, not just identity, but it's there for context — and CyberArk as well includes its acquisition of Idaptive, but also other parts of CyberArk's portfolio. So you can see some of the other names that are there, many of which you'll find in the Gartner Magic Quadrant for identity. And as we said, we really like this move by Okta. It combines positive market forces with leading offerings from very well-run companies that have winning DNA and passionate people.

Now, to further emphasize what's happening here, take a look at this. This chart shows ETR data for Okta within SailPoint and CyberArk accounts. Out of the 230 CyberArk and SailPoint customers in the data set, there are 81 Okta accounts — that's a 35 percent overlap. And the good news for Okta is that within that base of SailPoint and CyberArk accounts, Okta, as shown by the net score line — that green line — has very elevated spending momentum. And the kicker is, if you read the fine print in the right-hand column, ETR correctly points out that while SailPoint and CyberArk have long been partners with Okta, at the recent Oktane21 event, Okta's big customer event, the company announced that it was expanding into privileged access management, PAM, and identity governance. Hello, and welcome to coopetition in the 2020s. Now, our current thinking is that this bodes very well for Okta, and CyberArk and SailPoint — well, they're going to have to make some counter moves to fend off the onslaught that is coming.

Now let's wrap up with what has become a tradition in our quarterly security updates: looking at those two dimensions of net score and market share, we're going to see which companies crack the top 10 for both measures within the ETR data set. We do this every quarter. So here on the left we have the top 20 sorted by net score, or spending momentum, and on the right we sort by shared N — so again, top 20, which informs shared N and forms the market share metric, or presence in the data set. The red horizontal lines, the two lines on each, separate the top 10 from the remaining 10 within those top 20. In our method, we assign four stars to those companies that crack the top ten for both metrics. So again you see Microsoft, Palo Alto Networks, Okta, CrowdStrike and Fortinet. Fortinet, by the way, didn't make it last quarter; they've kind of been in and out and on the bubble, but, you know, this company is very strong and doing quite well. Only the other four did last quarter — it was the same four last quarter — and we give two stars to those companies that make it in both categories within the top 20 but didn't make the top 10: so Cisco; Splunk, which has been steadily decelerating from a spending momentum standpoint; and Zscaler, which is just on the cusp. You know, we really like Zscaler, and the company has great momentum, but that's the methodology — it is what it is. Now, you can see we kept Carbon Black on the rightmost chart — it's kind of cut off, it's number 21 — only because they're just outside looking in; you see them there, they're just below on net score, number 11, and with VMware's presence in the market we think that Carbon Black is really worth paying attention to.

Okay, so we're going to close with some summary and final thoughts. Last quarter we did a deeper dive on the SolarWinds hack, and we think the ramifications are significant. It has set the stage for a new era of escalation and adversary sophistication. Now, a major change we see is a heightened awareness that when you find intruders, you'd better think very carefully about your next moves. When someone breaks into your house, if the dog barks or if you come down with a baseball bat or other weapon, you might think the intruder is going to flee. But if the criminal badly wants what you have in your house, and it's valuable enough, you might find yourself in a bloody knife fight or worse. What's happening is intruders come into your company via island hopping or insider subterfuge or whatever method, and they'll live off the land, stealthily using your own tools against you, so you can't find them so easily. So instead of injecting new tools that send off an alert, they just use what you already have there — that's what's called living off the land. They'll steal sensitive data — for example, positive COVID test results, when that was really, really sensitive (obviously it still is), or other medical data — and when you retaliate, they will double-extort you. They'll encrypt your data and hold it for ransom, and at the same time threaten to release the sensitive information, crushing your brand in the process. So your response must be as stealthy as their intrusion as you marshal your resources and devise an attack plan. You face serious headwinds. Not only is this a complicated situation, there's your ongoing and acute talent shortage that you tell us about all the time. Many companies are mired in technical debt — that's an additional challenge — and then you've got to balance the running of the business while actually effecting a digital transformation. That's very, very difficult, and it's risky, because the more digital you become, the more exposed you are. So this idea of zero trust — people used to call it a buzzword — it's now a mandate, along with automation, because you just can't throw labor at the problem. This is all good news for investors, as cyber remains a market that's ripe for valuation increases and M&A activity, especially if you know where to look. Hopefully we've helped you squint through the maze a little bit.

Okay, that's it for now. Thanks to the community for your comments and insights. Remember, I publish each week on wikibon.com and siliconangle.com. These episodes are all available as podcasts — all you do is search "Breaking Analysis podcast," put in the headphones, and listen when you're in your car or out for your walk or run. You can always connect on Twitter @dvellante, or email me at david.vellante@siliconangle.com. I appreciate the comments on LinkedIn, and in Clubhouse please follow me so you're notified when we start a room and riff on these topics and others. And don't forget to check out etr.plus for all the survey data. This is Dave Vellante for theCUBE Insights powered by ETR. Be well, and we'll see you next time.

Published Date : May 8 2021


Breaking Analysis: Satya Nadella Lays out a Vision for Microsoft at Ignite 2021


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Microsoft CEO Satya Nadella sees a different future for cloud computing over the coming decade. In his Microsoft Ignite keynote, he laid out the five attributes that will define the cloud in the next 10 years. His vision is a cloud platform that is decentralized, ubiquitous, intelligent, sensing, and trusted — one that actually tickles the senses and levels the playing field between consumers and creators by placing tools in the hands of more people around the world. Welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis we'll review the highlights of Nadella's Ignite keynote and share our thoughts on what it means for the future of cloud specifically, and the tech industry generally. We'll also give you a more tactical view of Microsoft and compare its performance within the ETR dataset to its peers. Satya Nadella's forward-looking cloud attributes comprised five key vectors that he talked about. The first was ubiquitous and decentralized computing. Nadella made the statement that we've reached peak centralization today, that we're witnessing radical changes in computing architecture — from the materials used, to semiconductors, to software — and that this is going to serve a new frontier that's forming at the edge. Nadella envisions a world where there will be more sovereignty and decentralized control. We couldn't agree more. The cloud universe is expanding and the lines are blurring between what's being done on-prem, across public clouds and the cloud experience, which is going to extend everywhere, including the edge. And of course, data is going to be flowing through this hyper-decentralized system. Next was sovereign data and ambient intelligence. To us, data sovereignty means that whatever the local laws are, the system is going to have the intelligence to govern privacy, ensure data provenance, and adhere to corporate edicts. Ambient intelligence is a field of research that leverages pervasive sensor networks and AI to respond to and anticipate humans and machines. Nadella sees a future where business logic will move from being code that is written to code that is actually learned from data — pretty interesting. He sees this autodidactic system, if you will, as fundamental to tackling big problems like personalized medicine or even climate change. Third, he talked about empowered creators and communities everywhere. Nadella said there'll increasingly be a balance between consumption and creation. He's talking about an economic balance; essentially he's predicting that creation will be democratized, and his vision is to put tools in the hands of people to allow them to tip the scales toward knowledge workers, frontline employees, students — everyone, essentially, creating content, applications, code, et cetera. Power to the people, if you will. And underneath this vision are emerging new forms of silicon, operating systems and entirely transformative digital experiences. Next was economic opportunity for the global workforce. Picking up on the accelerated themes of remote work that were catalyzed by COVID, Nadella emphasized that the future has to accommodate flexibility in how, when and where people work.
He sees a new model of productivity emerging, not necessarily defined by corporate revenue per employee, for example, but by the economic advantages that become accessible to everyone through better access to technology, collaboration tools, education, and healthy lifestyles, all enabled by this ubiquitous cloud. Finally, trust by design. Nadella said that ethical principles must govern the design, development and deployment of AI. The system, he said, must be secure by design, with zero trust built in to protect business assets and personal privacy. So this was a big vision that Nadella put forth. It connects the dots between bits and atoms and sets up Microsoft to extend its reach well beyond office productivity tools and cloud infrastructure. He cited the Microsoft cloud as the underpinning of its future and specifically called out Teams; he mentioned Microsoft 365, HoloLens 2 and the announcement of Microsoft Mesh, a new mixed reality platform. Nadella said Mesh will do for virtual reality what Xbox Live did for gaming — take the experience from single-person to multi-person. Imagine holographic images with no screens, empowering advances in medicine, science, technology, and, very importantly, social interactions. Now, one of the things that we took away from his talk was this notion of Microsoft as a technology arms dealer. Now, while Nadella avoided slamming the competition directly by name, one statement that he made stood out. He said, "No customer wants to be dependent on a provider that sells them technology on one end and competes with them on the other." And to us this was a direct shot at Amazon, Google and Apple. How so, you ask? And what does it tell us? In his book "Seeing Digital," author David Moschella said that Silicon Valley, broadly defined, has a dual disruption agenda. What does that mean? Not only are large tech companies disrupting horizontal layers of the tech stack like compute, storage, networking, database, security, applications, and so forth, but they're also disrupting industries: Amazon in media, grocery and logistics, for example; Google and Amazon in healthcare; Google and Apple in automobiles; all three in FinTech. And it's likely this is just the beginning, but Nadella's posture suggests that Microsoft, for now anyway, is content being mostly a horizontal technology provider, aka arms dealer. Now, there are some examples where you could argue that Microsoft sort of crosses the line, maybe as a games developer or as a SaaS competitor. If you're a SaaS player, do you really want to run your system on Azure and compete with Microsoft? Well, it depends whether you're vertically oriented or maybe horizontal in their swim lanes, but anyway, these are more natural cohorts to technology than, say, Amazon's retail business. So I thought that was something that was worth taking a look at. All right, let's take a quick look at how Microsoft compares to a couple of the great tech giants of the past several decades. Here's a financial snapshot of Microsoft compared to Oracle, a highly profitable software company, and IBM, an industry legend. The first two things that jump right out about Microsoft: its size and its growth rate. Microsoft is twice the revenue of IBM and nearly 4x that of Oracle. And yet Microsoft is growing in the mid-teens, compared to low single digits for Oracle, and IBM continues to shrink — so, evidently, you can be big and still grow. Microsoft's gross margin model has been pulled down by its hardware business, but its operating margins are unbelievable.
Meanwhile, the cash on its balance sheet is immense, much larger than Oracle's, which is very impressive. It certainly dwarfs that of IBM, a company that had to take on a lot of debt to acquire Red Hat and has a balance sheet that increasingly looks more like Dell's than its historical self. And then, on the last two rows, Oracle and IBM, both owners of their own cloud, have been lapped by Microsoft in terms of CapEx and research & development investment. Ironically, as we pointed out, IBM's R&D spend in 2007, the year after AWS launched the modern era of cloud, was comparable to that of Microsoft. Let's now pivot to some of the ETR survey data and see how Microsoft fares. We'll start by sharing a fundamental basis of the ETR methodology, that is, the calculation of net score. Net score is a measure of spending momentum, and here's how it's derived. This chart shows the components of Microsoft's net score. It comprises five parts and represents the percentage of customers within the ETR survey with specific spending profiles. The lime green is new adoptions, the forest green is increased spend of 6% or more for 2021 relative to 2020, the gray is flat spend, the pinkish slice is spend declining by 6% or more relative to last year, and the bright red is replacing the platform. You subtract the reds from the greens and you get net score. As you can see, Microsoft's net score is 53%, which is very high for a $150 billion company. Now let's put that in context and expand the scope here a little bit. This chart shows how Microsoft fares relative to its peers: the vertical axis shows net score, or spending velocity, and the horizontal axis shows market share. Market share measures pervasiveness in the survey. In the table insert, you can see the vendors — they're sorted by net score — and the shared N column is there as well, which represents the number of shared accounts in the dataset. On both counts, bigger is better. Now note the red dotted line; that's the 40% watermark, which is my personal indicator of an elevated net score — anything above that, in our view, is really solid. Microsoft is, as usual, off-the-charts strong, well to the right with its market presence and an overall net score of 53%, as we showed earlier. And then there's Azure, separate from Microsoft overall. We wanted to plot that specifically; of course it doesn't have the presence of Microsoft overall, no surprise, but it's still prominent on the x-axis and it has a net score approaching 70%, which is quite amazing. AWS, not surprisingly, is highly elevated, with a presence that's even larger than Azure's. And you can see Zoom, Salesforce and Google Cloud all above the 40% line. Google, as we've reported, is well off the pace on the horizontal axis, and even though its net score is elevated, we would like to see it even higher, given its smaller size relative to AWS and Azure. You know, SAP always stands out because it's a large company and it's got a net score that's hovering just under 30%. It's not above that 40% line, but it's solid. And you can see IBM and Oracle — now, we're showing IBM and Oracle overall here, so it's the whole kitchen sink, comparable to Microsoft, that turquoise dot, if you will. So you can see why those two are valued much lower than Microsoft: the large base of their business that's declining is much, much larger than the pieces of their business that are growing.
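To make the net score arithmetic described above concrete, here is a small sketch. The percentages used are illustrative placeholders only — they are not ETR's published splits for Microsoft — but they show the mechanic: add the two shades of green, subtract the two shades of red, and ignore flat spend.

```python
# Hedged sketch of ETR-style net score arithmetic as described above.
# The example percentages are made up; only the formula follows the text.
def net_score(new_adoption: float, increased: float, flat: float,
              decreased: float, replacing: float) -> float:
    """Greens (new + increased) minus reds (decreased + replacing).

    Inputs are percentages of survey respondents for one vendor; flat
    spend is tracked but does not move the score.
    """
    total = new_adoption + increased + flat + decreased + replacing
    assert abs(total - 100.0) < 1e-6, "spending profiles should sum to 100%"
    return (new_adoption + increased) - (decreased + replacing)

# Placeholder values chosen to land near the 53% figure cited above.
print(net_score(new_adoption=13, increased=44, flat=39,
                decreased=3, replacing=1))  # -> 53.0
```

The 40% "elevated" watermark referenced throughout these charts is then simply a threshold applied to this number.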
Now, Oracle has some momentum: the Barron's article on February 19th, which declared Oracle a cloud giant and declared its stock a buy, combined with some earnings upgrades, including one today from Raimo Lenschow of Barclays, has catapulted the stock to all-time highs and a valuation over $200 billion. IBM is a different story. As we've discussed frequently, Arvind has a lot of work to do to get this national treasure back to its former prominence. Okay, let's now unpack Microsoft's vast portfolio a bit and see where it's doing well, where it's making moves and maybe where it's struggling some. This graphic shows Microsoft's net score across its entire product portfolio within the ETR taxonomy, and you can see it's pretty much killing it across the board. Microsoft plays in almost every sector in the ETR taxonomy, and you can see the 40% red line and how many of its offerings are above that line. The yellow bar is the most recent survey, and while there's quite a bit of gray, i.e. flat spend relative to 2020, we're talking about some very tough compares from last year. And yet there's still a huge chunk of the portfolio in the green, meaning spending momentum is actually up from last year in some of Microsoft's most important sectors, like Cloud, Teams and Analytics. Look, only Skype and Microsoft Dynamics are lagging, so really a nice story there in our view. Now let's come back and take a look at Microsoft's cloud business specifically, as compared to its peers. So Satya basically said that Microsoft's future will build on top of its cloud, and looking at this picture, it's pretty encouraging for the company. This chart, again, shows net score, or spending momentum, specifically inside Fortune 500 customers — a key bellwether in the ETR dataset — and you can see Azure and Azure Functions well above the 40% red line and extremely well positioned relative to AWS and GCP. Importantly, the yellow bar tells us that, compared to previous surveys, Microsoft's cloud business is actually gaining momentum in this very important sector. Now, other notable call-outs on this chart: VMware Cloud, which is its on-prem hybrid cloud, and VMware Cloud on AWS, which is reportedly doing well but off from the momentum of its highs last spring. You can see Oracle jumped up, indicating cloud momentum, but still well below the performance of the largest cloud players. The IBM Cloud appears to be a non-factor in the survey, and as we previously stated, we'd like to see IBM recalibrate the financials for its cloud business and come up with a reporting framework that better represents the prevailing mental model of cloud computing. We think a cleaner number would allow IBM to build on the Red Hat momentum. I'm not sure what to make of the HPE boost. It looks significant, but digging into the data, it's only 17 data points — but look, 17 within Fortune 500 companies is not terrible. And HPE's net score in that sector is more than double its overall cloud net score, so that's positive, we think. Okay, let's wrap by looking at how customers are thinking about multi-cloud adoption. Really, this data that we're about to show you is simply asking customers about the clouds they're using, versus any type of long-term vision, so it's a good representation of what's happening today and what CIOs are thinking about in the near future, particularly over the next 12 months. The survey asks customers to describe their cloud provider usage and strategy.
You can see that only 14% of the survey respondents have an exclusively mono-cloud strategy, but add in another 22% who are predominantly single cloud and you now have more than a third of the customer base gravitating toward mono-cloud. Another 14% say they're concentrating their cloud providers more narrowly. Now on the flip side, you've got a big group, 29%, that is moving toward multi-cloud, and if you add in the additional 16% who say they are and will continue to be evenly spread, 45% of the survey is solidly headed in that direction, so it's a mixed picture. What's the takeaway? Well, we think Andy Jassy is right when he says that while many customers use more than one cloud, they tend to have a primary provider and something like a 70/30 or even 80/20 split between primary and secondary clouds. Now we think, however, that this will change, but only to the extent that the vendor community is adding value on top of the existing hyperscale clouds. What we're saying, and have been saying, is that there is a real opportunity to create value on top of the cloud infrastructure that's being built out by AWS, Google and Microsoft. Instead of fearing cloud, the vendor community should be embracing it, creating a layer on top, abstracting away the underlying complexities associated with cloud native, exploiting cloud native, and then building on top of that. Snowflake's data cloud vision is right on in my view, and we can envision virtually every layer of the stack following suit. Even within database there are opportunities to identify more granular segments across clouds. For example, despite Snowflake's early multi-cloud lead, you're seeing competitive firms like Teradata begin to architect a system across clouds that can query data warehouses from distributed locations, including on-prem, as part of what they refer to as a data fabric. Sounds kind of like Snowflake's global data mesh, or maybe better, Zhamak Dehghani's data mesh. Yeah, sure, but Teradata has capabilities that Snowflake doesn't, for example the ability to do complex joins, and we can see plenty of market for both companies to differentiate. And why shouldn't a similar vision extend from on-prem, across clouds, to the edge, for data protection, security, governance, hybrid compute, analytics, federated applications? It's a huge market, and the hyperscale providers are likely too busy worrying about their own walled gardens to start building on top of their competitors' clouds. So Dell, HPE, VMware, Cisco, Palo Alto Networks, Fortinet, Zscaler, Cohesity, Veeam and hundreds of other tech companies, including, by the way, IBM and Oracle, should be saying thank you to AWS, Google and Microsoft for spending all that money to build out great infrastructure on which they can build value and tap for future growth. And many of you will say, hey, we're already doing this. Okay, I'll be watching to see the ratio of real versus slideware, because generally today, in my opinion, the denominator is much larger than the numerator. So when that ratio hits 1X we'll know it has started to become real. Okay, that's it for today. Remember, all these episodes are available as podcasts wherever you listen, so please subscribe. I publish weekly on wikibon.com and siliconangle.com. Please comment on my LinkedIn posts, or you can tweet me @DVellante, or feel free to email me at David.Vellante@siliconangle.com. And don't forget to check out etr.plus for all the survey and data science action. This is Dave Vellante for theCUBE Insights powered by ETR.
Be well, thanks for watching and we'll see you next time. (relaxing music)

Published Date : Mar 8 2021

ON DEMAND API GATEWAYS INGRESS SERVICE MESH


 

>> Thank you, everyone for joining. I'm here today to talk about ingress controllers, API gateways, and service mesh on Kubernetes, three very hot topics that are also frequently confusing. So I'm Richard Li, founder/CEO of Ambassador Labs, formerly known as Datawire. We sponsor a number of popular open source projects that are part of the Cloud Native Computing Foundation, including Telepresence and Ambassador, which is a Kubernetes native API gateway. And most of what I'm going to talk about today is related to our work around Ambassador. So I want to start by talking about application architecture and workflow on Kubernetes and how applications that are being built on Kubernetes really differ from how they used to be built. So when you're building applications on Kubernetes, the traditional architecture is the very famous monolith. And the monolith is a central piece of software. It's one giant thing that you build deploy, run. And the value of a monolith is it's really simple. And if you think about the monolithic development process, more importantly is that architecture is really reflected in that workflow. So with a monolith, you have a very centralized development process. You tend not to release too frequently because you have all these different development teams that are working on different features, and then you decide in advance when you're going to release that particular piece of software and everyone works towards that release train. And you have specialized teams. You have a development team, which has all your developers. You have a QA team, you have a release team, you have an operations team. So that's your typical development organization and workflow with a monolithic application. As organizations shift to microservices, they adopt a very different development paradigm. It's a decentralized development paradigm where you have lots of different independent teams that are simultaneously working on different parts of this application, and those application components are really shipped as independent services. And so you really have a continuous release cycle because instead of synchronizing all your teams around one particular vehicle, you have so many different release vehicles that each team is able to ship as soon as they're ready. And so we call this full cycle development because that team is really responsible not just for the coding of that microservice, but also the testing and the release and operations of that service. So this is a huge change, particularly with workflow, and there's a lot of implications for this. So I have a diagram here that just tries to visualize a little bit more the difference in organization. With the monolith, you have everyone who works on this monolith. With microservices, you have the yellow folks work on the yellow microservice and the purple folks work on the purple microservice and maybe just one person work on the orange microservice and so forth. So there's a lot more diversity around your teams and your microservices, and it lets you really adjust the granularity of your development to your specific business needs. So how do users actually access your microservices? Well, with a monolith, it's pretty straightforward. You have one big thing, so you just tell the internet, well, I have this one big thing on the internet. Make sure you send all your traffic to the big thing. But when you have microservices and you have a bunch of different microservices, how do users actually access these microservices? 
So the solution is an API gateway. So the API gateway consolidates all access to your microservices. So requests come from the internet. They go to your API gateway. The API gateway looks at these requests, and based on the nature of these requests, it routes them to the appropriate microservice. And because the API gateway is centralizing access to all of the microservices, it also really helps you simplify authentication, observability, routing, all these different cross-cutting concerns, because instead of implementing authentication in each of your microservices, which would be a maintenance nightmare and a security nightmare, you've put all of your authentication in your API gateway. So if you look at this world of microservices, API gateways are a really important part of your infrastructure which are really necessary, and pre-microservices, pre-Kubernetes, an API gateway, while valuable, was much more optional. So that's one of the really big things around recognizing with the microservices architecture, you really need to start thinking much more about an API gateway. The other consideration with an API gateway is around your management workflow, because as I mentioned, each team is actually responsible for their own microservice, which also means each team needs to be able to independently manage the gateway. So Team A working on that microservice needs to be able to tell the API gateway, this is how I want you to route requests to my microservice, and the purple team needs to be able to say something different for how purple requests get routed to the purple microservice. So that's also a really important consideration as you think about API gateways and how it fits in your architecture, because it's not just about your architecture, it's also about your workflow. So let me talk about API gateways on Kubernetes. I'm going to start by talking about ingress. So ingress is the process of getting traffic from the internet to services inside the cluster. Kubernetes, from an architectural perspective, it actually has a requirement that all the different pods in a Kubernetes cluster needs to communicate with each other. And as a consequence, what Kubernetes does is it creates its own private network space for all these pods, and each pod gets its own IP address. So this makes things very, very simple for interpod communication. Kubernetes, on the other hand, does not say very much around how traffic should actually get into the cluster. So there's a lot of detail around how traffic actually, once it's in the cluster, how you route it around the cluster, and it's very opinionated about how this works, but getting traffic into the cluster, there's a lot of different options and there's multiple strategies. There's Pod IP, there's Ingress, there's LoadBalancer resources, there's NodePort. I'm not going to go into exhaustive detail on all these different options, and I'm going to just talk about the most common approach that most organizations take today. So the most common strategy for routing is coupling an external load balancer with an ingress controller. And so an external load balancer can be a hardware load balancer. It can be a virtual machine. It can be a cloud load balancer. 
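Before going further into the load balancer chain, here is a rough sketch of the fan-out role the API gateway plays, as described above: requests arrive at one central point and are routed by path to the appropriate microservice. The sketch uses Go's standard-library reverse proxy; the service names, ports, and path prefixes are placeholders, not anything prescribed by a particular gateway product.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// proxyTo returns a handler that forwards requests to one backend,
// the way a gateway fans requests out to individual microservices.
func proxyTo(backend string) http.Handler {
	target, err := url.Parse(backend)
	if err != nil {
		log.Fatal(err)
	}
	return httputil.NewSingleHostReverseProxy(target)
}

func main() {
	// Each path prefix maps to a different microservice (placeholder names).
	http.Handle("/users/", proxyTo("http://user-service:8080"))
	http.Handle("/orders/", proxyTo("http://order-service:8080"))

	// Cross-cutting concerns such as authentication would be applied here,
	// once, rather than re-implemented inside every microservice.
	log.Fatal(http.ListenAndServe(":80", nil))
}
```

A production gateway layers authentication, rate limiting, and observability around this same routing core, which is exactly the consolidation argument made above.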
But the key requirement for an external load balancer is to be able to attach a stable IP address so that you can actually map a domain name and DNS to that particular external load balancer, and that external load balancer usually, but not always, will then route traffic and pass that traffic straight through to your ingress controller. And then your ingress controller takes that traffic and then routes it internally inside Kubernetes to the various pods that are running your microservices. There are other approaches, but this is the most common approach. And the reason for this is that the alternative approaches really require each of your microservices to be exposed outside of the cluster, which causes a lot of challenges around management and deployment and maintenance that you generally want to avoid. So I've been talking about an ingress controller. What exactly is an ingress controller? So an ingress controller is an application that can process rules according to the Kubernetes ingress specification. Strangely, Kubernetes is not actually shipped with a built-in ingress controller. I say strangely because you think, well, getting traffic into a cluster is probably a pretty common requirement, and it is. It turns out that this is complex enough that there's no one size fits all ingress controller. And so there is a set of ingress rules that are part of the Kubernetes ingress specification that specify how traffic gets routed into the cluster, and then you need a proxy that can actually route this traffic to these different pods. And so an ingress controller really translates between the Kubernetes configuration and the proxy configuration, and common proxies for ingress controllers include HAProxy, Envoy Proxy, or NGINX. So let me talk a little bit more about these common proxies. So all these proxies, and there are many other proxies. I'm just highlighting what I consider to be probably the three most well-established proxies, HAProxy, NGINX, and Envoy Proxy. So HAProxy is managed by HAProxy Technologies. Started in 2001. The HAProxy organization actually creates an ingress controller. And before they created an ingress controller, there was an open source project called Voyager which built an ingress controller on HAProxy. NGINX, managed by NGINX, Inc., subsequently acquired by F5. Also open source. Started a little bit later, the proxy, in 2004. And there's the Nginx-ingress, which is a community project. That's the most popular. As well as the Nginx, Inc. kubernetes-ingress project, which is maintained by the company. This is a common source of confusion because sometimes people will think that they're using the NGINX ingress controller, and it's not clear if they're using this commercially supported version or this open source version. And they actually, although they have very similar names, they actually have different functionality. Finally, Envoy Proxy, the newest entrant to the proxy market, originally developed by engineers at Lyft, the ride sharing company. They subsequently donated it to the Cloud Native Computing Foundation. Envoy has become probably the most popular cloud native proxy. It's used by Ambassador, the API gateway. It's used in the Istio service mesh. It's used in the VMware Contour. It's been used by Amazon in App Mesh. It's probably the most common proxy in the cloud native world. So as I mentioned, there's a lot of different options for ingress controllers. 
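To show what the Kubernetes ingress rules discussed above look like in practice, here is a hedged sketch that builds a minimal Ingress object with the Go API types and prints it as YAML. It assumes the k8s.io/api, k8s.io/apimachinery, and sigs.k8s.io/yaml modules; the host, path, and service name are placeholders. An ingress controller is the component that watches objects like this and translates them into proxy configuration.

```go
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pathType := networkingv1.PathTypePrefix

	// Route example.com/users to a Service named user-service on port 80.
	ing := networkingv1.Ingress{
		TypeMeta:   metav1.TypeMeta{APIVersion: "networking.k8s.io/v1", Kind: "Ingress"},
		ObjectMeta: metav1.ObjectMeta{Name: "example"},
		Spec: networkingv1.IngressSpec{
			Rules: []networkingv1.IngressRule{{
				Host: "example.com",
				IngressRuleValue: networkingv1.IngressRuleValue{
					HTTP: &networkingv1.HTTPIngressRuleValue{
						Paths: []networkingv1.HTTPIngressPath{{
							Path:     "/users",
							PathType: &pathType,
							Backend: networkingv1.IngressBackend{
								Service: &networkingv1.IngressServiceBackend{
									Name: "user-service",
									Port: networkingv1.ServiceBackendPort{Number: 80},
								},
							},
						}},
					},
				},
			}},
		},
	}

	out, err := yaml.Marshal(ing)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```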
The most common is the NGINX ingress controller, not the one maintained by NGINX, Inc., but the one that's part of the Kubernetes project. Ambassador is the most popular Envoy-based option. Another common option is the Istio Gateway, which is directly integrated with the Istio mesh, and that's actually part of Docker Enterprise. So with all these choices around ingress controllers, how do you actually decide? Well, the reality is the ingress specification's very limited. And the reason for this is that getting traffic into a cluster, there's a lot of nuance in how you want to do that, and it turns out it's very challenging to create a generic, one-size-fits-all specification because of the vast diversity of implementations and choices that are available to end users. And so you don't see ingress specifying anything around resilience. So if you want to specify a timeout or rate limiting, it's not possible. Ingress is really limited to support for HTTP. So if you're using gRPC or WebSockets, you can't use the ingress specification. Different ways of routing, authentication. The list goes on and on. And so what happens is that different ingress controllers extend the core ingress specification to support these use cases in different ways. So NGINX ingress actually uses a combination of config maps and the ingress resources plus custom annotations that extend the ingress to really let you configure a lot of the additional extensions that are exposed in the NGINX ingress. With Ambassador, we actually use custom resource definitions, different CRDs that extend Kubernetes itself to configure Ambassador. And one of the benefits of the CRD approach is that we can create a standard schema that's actually validated by Kubernetes. So when you do a kubectl apply of an Ambassador CRD, kubectl can immediately validate and tell you if you're actually applying a valid schema and format for your Ambassador configuration. And as I previously mentioned, Ambassador's built on Envoy Proxy. Istio Gateway also uses CRDs. They can be used as an extension of the service mesh CRDs as opposed to dedicated gateway CRDs. And again, Istio Gateway is built on Envoy Proxy. So I've been talking a lot about ingress controllers, but the title of my talk was really about API gateways and ingress controllers and service mesh. So what's the difference between an ingress controller and an API gateway? So to recap, an ingress controller processes Kubernetes ingress routing rules. An API gateway is a central point for managing all your traffic to Kubernetes services. It typically has additional functionality such as authentication, observability, a developer portal, and so forth. So what you find is that not all API gateways are ingress controllers, because some API gateways don't support Kubernetes at all, so they can't be ingress controllers. And not all ingress controllers support the functionality, such as authentication, observability, a developer portal, that you would typically associate with an API gateway. So generally speaking, API gateways that run on Kubernetes should be considered a superset of an ingress controller. But if the API gateway doesn't run on Kubernetes, then it's an API gateway and not an ingress controller. So what's the difference between a service mesh and an API gateway? So an API gateway is really focused on traffic into and out of a cluster. The colloquial term for this is North/South traffic. A service mesh is focused on traffic between services in a cluster, East/West traffic.
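Picking up the annotation point above: the core ingress specification has no field for things like rewrites or per-route timeouts, so the community NGINX ingress controller reads them from annotations on the Ingress object's metadata. A small sketch follows; the two keys shown are the commonly documented ingress-nginx ones, but treat them as illustrative and confirm against the docs for whichever controller you actually deploy.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Controller-specific annotations carry what the core ingress spec cannot express.
	meta := metav1.ObjectMeta{
		Name: "example",
		Annotations: map[string]string{
			// Rewrite the matched path before proxying to the upstream service.
			"nginx.ingress.kubernetes.io/rewrite-target": "/",
			// Per-route proxy read timeout in seconds.
			"nginx.ingress.kubernetes.io/proxy-read-timeout": "60",
		},
	}
	for key, value := range meta.Annotations {
		fmt.Printf("%s: %s\n", key, value)
	}
}
```

Ambassador's CRD approach expresses the same kind of route-level settings as typed fields instead, which is what allows Kubernetes to validate them at apply time, as noted above.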
All service meshes need an API gateway. So Istio includes a basic ingress or API gateway called the Istio Gateway, because a service mesh needs traffic from the internet to be routed into the mesh before it can actually do anything. Envoy Proxy, as I mentioned, is the most common proxy for both mesh and gateways. Docker Enterprise provides an Envoy-based solution out of the box, Istio Gateway. The reason Docker does this is because, as I mentioned, Kubernetes doesn't come package with an ingress. It makes sense for Docker Enterprise to provide something that's easy to get going, no extra steps required, because with Docker enterprise, you can deploy it and get going, get it exposed on the internet without any additional software. Docker Enterprise can also be easily upgraded to Ambassador because they're both built on Envoy. It ensures consistent routing semantics. And also with Ambassador, you get greater security for single sign-on. There's a lot of security by default that's configured directly into Ambassador. Better control over TLS, things like that. And then finally, there's commercial support that's actually available for Ambassador. Istio is an open source project that has a very broad community, but no commercial support options. So to recap, ingress controllers and API gateways are critical pieces of your cloud native stack. So make sure that you choose something that works well for you. And I think a lot of times organizations don't think critically enough about the API gateway until they're much further down the Kubernetes journey. Considerations around how to choose that API gateway include functionality such as how does it do with traffic management and observability? Does it support the protocols that you need? Also nonfunctional requirements such as does it integrate with your workflow? Do you offer commercial support? Can you get commercial support for this? An API gateway is focused on North/South traffic, so traffic into and out of your Kubernetes cluster. A service mesh is focused on East/West traffic, so traffic between different services inside the same cluster. Docker Enterprise includes Istio Gateway out of the box. Easy to use, but can also be extended with Ambassador for enhanced functionality and security. So thank you for your time. Hope this was helpful in understanding the difference between API gateways, ingress controllers, and service meshes, and how you should be thinking about that on your Kubernetes deployment.

Published Date : Sep 14 2020

API Gateways Ingress Service Mesh | Mirantis Launchpad 2020


 


Published Date : Sep 12 2020

Vijoy Pandey, Cisco | KubeCon + CloudNativeCon Europe 2020 - Virtual


 

>> From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2020 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation, and Ecosystem Partners. >> Hi and welcome back to theCUBE's coverage of KubeCon, CloudNativeCon 2020 in Europe, of course the virtual edition. I'm Stu Miniman and happy to welcome back to the program one of the keynote speakers, he's also a board member of the CNCF, Vijoy Pandey, who is the vice president and chief technology officer for Cloud at Cisco. Vijoy, nice to see you and thanks so much for joining us. >> Thank you Stu, and nice to see you again. It's a strange setting to be in, but as long as we are both healthy, everything is good. >> Yeah, we still get to be together a little bit even though we're apart. We love the engagement and interaction that we normally get through the community, but we just have to do it a little bit differently this year. So we're going to get to your keynote. We've had you on the program to talk about "Network, Please Evolve", been watching that journey. But why don't we start first, you know, you've had a little bit of change in roles and responsibility. I know there's been some restructuring at Cisco since the last time we got together. So give us the update on your role. >> Yeah, so let's start there. So I've taken on a new responsibility. It's VP of Engineering and Research for a new group that's been formed at Cisco. It's called Emerging Tech and Incubation. Liz Centoni leads that and she reports into Chuck. The charter for this team, this new team, is to incubate the next bets for Cisco. And, if you can imagine, it's natural for Cisco to start with bets which are closer to its core business, but the charter for this group is to move further and further out from Cisco's core business and take that core into newer markets, into newer products, and newer businesses. I am running the engineering and research for that group. And, again, the whole deal behind this is to be a little bit nimble, to be a little startupy in nature, where you bring ideas, you incubate them, you iterate pretty fast and you throw out 80% of those and concentrate on the 20% that make sense to take forward as a venture. >> Interesting. So it reminds me a little bit, but different, I remember John Chambers a number of years back talking about various adjacencies, trying to grow those next, you know, multi-billion dollar businesses inside Cisco. In some ways, Vijoy, it reminds me a little bit of your previous company, very well known for, you know, driving innovation, giving engineering 20% of their time to work on things. Give us a little bit of insight. What's kind of an example of a bet that you might be looking at in the space? Bring us inside a little bit. >> Well that's actually a good question, and I think a little bit of that comparison is, those are conversations taking place within Cisco as well, as to how far out from Cisco's core business we want to get when we're incubating these bets. And, yes, my previous employer, I mean Google X, actually goes pretty far out when it comes to incubations. The core business being primarily around ads, now Google Cloud as well, but you have things like Verily and Calico and others which are pretty far out from where Google started. And the way we are looking at these things within Cisco is, it's a new muscle for Cisco, so we want to prove ourselves first.
So the first few bets that we are betting upon are pretty close to Cisco's core but still not fitting into Cisco's BUs when it comes to go-to-market alignment or business alignment. So the first bet that we are going after is around the API being the queen when it comes to the future of infrastructure, so to speak. So it's not just making our infrastructure consumable as infrastructure as code, but also talking about developer relevance, talking about how developers are actually influencing infrastructure deployments. So if you think about the problem statement in that sense, then networking needs to evolve. And I talked a lot about this in the past couple of keynotes, where Cisco's core business has been around connecting and securing physical endpoints, physical I/O endpoints, whatever they happen to be, of whatever type they happen to be. And one of the bets, actually two of the bets, that we are going after is around connecting and securing API endpoints, wherever they happen to be, of whatever type they happen to be. And so API networking, or app networking, is one big bet that we're going after. Our other big bet is around API security, and that has a bunch of other connotations to it, where we think about security moving from runtime security, where traditionally Cisco has played in that space, especially on the infrastructure side, to API security, which sits in the developer pipeline and higher up in the stack. So those are two big bets that we're going after, and as you can see, they're pretty close to Cisco's core business but also very differentiated from where Cisco is today. And once you prove some of these bets out, you can walk further and further away, or a few degrees away, from Cisco's core as it exists today. >> All right, well Vijoy, I mentioned you're also on the board for the CNCF, maybe let's talk a little bit about open source. How does that play into what you're looking at for emerging technologies and these bets? You know, for so many companies that's an integral piece, and we've watched, you know really, the maturation of Cisco's journey, participating in these open source environments. So help us tie in where Cisco is when it comes to open source. >> So, yeah, so I think we've been pretty deeply involved in open source in our past. We've been deeply involved in Linux Foundation Networking. We've actually chartered FD.io as a project there and we still are. We've been involved in OpenStack. We are big supporters of OpenStack. We have a couple of products in our OpenStack offering. And as you all know, we've been involved in CNCF right from the get-go as a foundational member. We brought NSM in as a project. It's sandbox currently. We're hoping to move it forward. But even beyond that, I mean we are big users of open source. You know, a lot of the SaaS offerings that we have from Cisco, and you would not know this if you're not inside of Cisco, but Webex, for example, is a big, big user of Linkerd right from the get-go, from version 1.0. But we don't talk about it, which is sad. I think, for example, we use Kubernetes pretty deeply in our DNAC platform on the enterprise side. We use Kubernetes very deeply in our security platforms. So we are pretty deep users internally in all our SaaS products. But we want to press the accelerator and accelerate this whole journey towards open source quite a bit moving forward as part of ET&I, Emerging Tech and Incubation, as well.
So you will see more of us in open source forums, not just the CNCF, but very recently we joined the Linux Foundation for Public Health as a premier foundational member. Dan Kohn, our old friend, is actually chartering that initiative, and we are big believers in handling data in ethical and privacy-preserving ways. So that's something that enticed us to join the Linux Foundation for Public Health, and we will be working very closely with Dan and the foundational companies there to not just bring open source, but also evangelize and use what comes out of that forum. >> All right. Well, Vijoy, I think it's time for us to dig into your keynote. We've spoken with you in previous KubeCons about the "Network, Please Evolve" theme that you've been driving on, and a big focus you talked about was SD-WAN. Of course anybody that's been watching the industry has watched the real ascension of SD-WAN. We've called it one of those critical foundational pieces of companies enabling multicloud, so help us, you know, help explain to our audience a little bit what you mean when you talk about things like CloudNative SD-WAN, and how that helps people really enable their applications in the modern environment. >> Yeah, so we've been talking about SD-WAN for a while. I mean, it's one of the transformational technologies of our time, where prior to SD-WAN existing, you had to stitch all of these MPLS labels and actual data connectivity across to your enterprise or branch, and SD-WAN came in and changed the game there. But I think SD-WAN as it exists today is application-unaware. And that's one of the big things that I talk about in my keynote. Also, we've talked about how NSM, on the other side of the spectrum, how NSM, or Network Service Mesh, has actually helped us simplify operational complexities, simplify the ticketing and process hell that any developer needs to go through just to get a multicloud, multicluster app up and running. So the keynote actually talked about bringing those two things together. We've talked about using NSM in the past, in chapter one and chapter two, ah chapter two, no this is chapter three, and at some point I would like to stop the chapters. I don't want this to be like an encyclopedia of networking. (mumbling) But we are at chapter three, and we are talking about how you can take the same consumption models that I talked about in chapter two, which is just adding a simple annotation in your CRD, and extending that notion of multicloud, multicluster wires within the components of our application all the way down to the user in an enterprise. And as you saw in an example, Gavin Russom is trying to give a keynote holographically and he's suffering from SD-WAN being application-unaware. And using this construct of a simple annotation, we can actually make SD-WAN CloudNative. We can make it application-aware, and we can guarantee the SLOs that Gavin is looking for in terms of 3D video, in terms of file access or audio, just to make sure that he's successful and Ross doesn't come in and take his place. >> Well, I expect Gavin will do something to mess things up on his own even if the technology works flawlessly. You know, Vijoy, the modernization journey that customers are on is a never-ending story. I understand the chapters need to end on the current volume that you're working on. But, you know, we'd love to get your viewpoint. You talk about things like service mesh.
It's definitely been a hot topic of conversation for the last couple of years. What are you hearing from your customers? What are some of the real challenges, but opportunities, that they see in today's CloudNative space? >> In general, service meshes are here to stay. In fact, they're here to proliferate to some degree, and we are seeing a lot of that happening, where not only are we seeing different service meshes coming into the picture through various open source mechanisms. You've got Istio there, you've got Linkerd, you've got various proprietary notions around control planes like App Mesh from Amazon. There's Consul, which is an open source project but not part of (mumbles) today. So there's a whole bunch of service meshes in terms of control planes coming in, with Envoy becoming a de facto sidecar data plane, whatever you would like to call it, a de facto standard there, which is good for the community I would say. But this proliferation of control planes is actually a problem. And I see customers actually deploying a multitude of service meshes in their environment. And that's here to stay. In fact, we are seeing a whole bunch of things that we would have used different tools for, like API gateways in the past, and those functions are actually rolling into service meshes. And so I think service meshes are here to stay. I think the diversity of service meshes is here to stay. And so some work has to be done in bringing these things together, and that's something that we are trying to focus in on as well, because that's something that our customers are asking for. >> Yeah, actually you connected for me something I wanted to get your viewpoint on. Dial back, you know, 10, 15 years ago and everybody would say, "Ah, you know, I really want to have a single pane of glass to be able to manage everything." Cisco's partnering with all of the major cloud providers. I saw, you know, not that long before this event, Google had their Google Cloud show talking about the partnership that you have with Cisco with Google. They have Anthos. You look at Azure, it has Arc. You know, VMware has Tanzu. Everybody's talking about, really, kind of this multicluster management type of solution out there. And I just want to get your viewpoint on this, Vijoy: you know, how are we doing on the management plane, and what do you think we need to do as an industry as a whole to make things better for customers? >> Yeah, but I think this is where I think we need to be careful as an industry, as a community, and make things simpler for our customers, because, like I said, the proliferation of all of these control planes begs the question, do we need to build something else to bring all of these things together? And I think the SMI proposal from Microsoft is bang on on that front, where you're trying to unify at least the consumption model around how you consume these service meshes. But it's not just a question of service meshes. As you saw in the SD-WAN discussion, and also going back to the Google conference that we just referenced, it's also how SD-WANs are going to interoperate with the services that exist within these cloud silos to some degree. And how does that happen? And there was a teaser there that you saw earlier in the keynote, where we are taking those constructs that we talked about in the Google conference and bringing it all the way to a CloudNative environment in the keynote.
But I think the bigger problem here is how do we manage this complexity of disparate stacks, whether it's service meshes, whether it's development stacks, or whether it's SD-WAN deployments. How do we manage that complexity? And single pane of glass is overloaded as a term, because it brings in these notions of big, monolithic panes of glass, and I think that's not the way we should be solving it. We should be solving it using API simplicity and API interoperability. I think that's where we as a community need to go. >> Absolutely. Well, Vijoy, as you said, you know, the API economy should be able to help on these fronts, and the service architecture should allow things to be more flexible and give me the visibility I need without trying to have to build something that's completely monolithic. Vijoy, thanks so much for joining. Looking forward to hearing more about the big bets coming out of Cisco, and congratulations on the new role. >> Thank you Stu. It was a pleasure to be here. >> All right, and stay tuned for much more coverage of theCUBE at KubeCon, CloudNativeCon. I'm Stu Miniman and thanks for watching. (light digital music)

Published Date : Aug 18 2020

SUMMARY :

brought to you by Red Hat, Vijoy, nice to see you and nice to see you again. since the last time we got together. and concentrate on the 20% that make sense that you might be looking at in the space? And the way we are looking at and we've watched, you and the foundational companies there to, and big focus you talked about was SD-WAN. and we are talking about What are some of the the and we are seeing a lot of that happening and what do you think we need in the Google discussion that you just, and give me the visibility I need Thank you Stu. I'm Stu Miniman and thanks for watching.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dan Kohn | PERSON | 0.99+
Cisco | ORGANIZATION | 0.99+
Liz Centoni | PERSON | 0.99+
CloudNative Computing Foundation | ORGANIZATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
one | QUANTITY | 0.99+
Red Hat | ORGANIZATION | 0.99+
20% | QUANTITY | 0.99+
Vijoy Pandey | PERSON | 0.99+
80% | QUANTITY | 0.99+
Linux Foundation for Public Health | ORGANIZATION | 0.99+
Gavin | PERSON | 0.99+
Stu Miniman | PERSON | 0.99+
Vijoy | PERSON | 0.99+
Stu | PERSON | 0.99+
Dan | PERSON | 0.99+
Emerging Tech | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
CNCF | ORGANIZATION | 0.99+
ET&I | ORGANIZATION | 0.99+
KubeCon | EVENT | 0.99+
first bets | QUANTITY | 0.99+
Gavin Russom | PERSON | 0.99+
CloudNativeCon | EVENT | 0.99+
Verily | ORGANIZATION | 0.99+
Ross | PERSON | 0.99+
Europe | LOCATION | 0.99+
Chuck | PERSON | 0.99+
Webex | ORGANIZATION | 0.99+
Ecosystem Partners | ORGANIZATION | 0.99+
John Chambers | PERSON | 0.99+
NSM | ORGANIZATION | 0.98+
Calico | ORGANIZATION | 0.98+
two big bets | QUANTITY | 0.98+
both | QUANTITY | 0.98+
NCF | ORGANIZATION | 0.98+
VMware | ORGANIZATION | 0.97+
Linux | TITLE | 0.97+
two things | QUANTITY | 0.97+
CloudNativeCon 2020 | EVENT | 0.97+
today | DATE | 0.96+
SAS | ORGANIZATION | 0.96+
Emerging Tech and Incubation | ORGANIZATION | 0.96+
first | QUANTITY | 0.96+
one big bet | QUANTITY | 0.96+
chapter two | OTHER | 0.95+
this year | DATE | 0.95+
first few bets | QUANTITY | 0.95+
chapter one | OTHER | 0.94+
Tanzu | ORGANIZATION | 0.94+
theCUBE | ORGANIZATION | 0.94+
chapter three | OTHER | 0.93+

Vijoy Pandey, Cisco | KubeCon + CloudNativeCon Europe 2020


 

(upbeat music) >> From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2020 Virtual brought to you by Red Hat, the Cloud Native Computing Foundation, and the ecosystem partners. >> Hi, and welcome back to theCUBE's coverage of KubeCon + CloudNativeCon 2020 in Europe, of course, the virtual edition. I'm Stu Miniman, and happy to welcome you back to the program. One of the keynote speakers is also a board member of the CNCF, Vijoy Pandey, who is the Vice President and Chief Technology Officer for Cloud at Cisco. Vijoy, nice to see you, thanks so much for joining us. >> Hi there, Stu, so nice to see you again. It's a strange setting to be in, but as long as we are both healthy, everything's good. >> Yeah, we still get to be together a little bit even though while we're apart. We love the engagement and interaction that we normally get with the community, but we just have to do it a little bit differently this year. So we're going to get to your keynote. We've had you on the program to talk about "Networking, Please Evolve". I've been watching that journey. But why don't we start at first, you've had a little bit of change in roles and responsibility. I know there's been some restructuring at Cisco since the last time we got together. So give us the update on your role. >> Yeah, so let's start there. So I've taken on a new responsibility. It's VP of Engineering and Research for a new group that's been formed at Cisco. It's called Emerging Tech and Incubation. Liz Centoni leads that and she reports in to Chuck. The charter for the team, this new team, is to incubate the next bets for Cisco. And if you can imagine, it's natural for Cisco to start with bets which are closer to its core business. But the charter for this group is to move further and further out from Cisco's core business and take Cisco into newer markets, into newer products, and newer businesses. I'm running the engineering and research for that group. And again, the whole deal behind this is to be a little bit nimble, to be a little bit startupy in nature, where you bring ideas, you incubate them, you iterate pretty fast, and you throw out 80% of those, and concentrate on the 20% that makes sense to take forward as a venture. >> Interesting. So it reminds me a little bit, but different, I remember John Chambers, a number of years back, talking about various adjacencies trying to grow those next multi-billion dollar businesses inside Cisco. In some ways, Vijoy, it reminds me a little bit of your previous company, very well known for driving innovation, giving engineers 20% of their time to work on things, maybe give us a little bit of insight, what's kind of an example of a bet that you might be looking at in this space, bring us in tight a little bit. >> Well, that's actually a good question. And I think a little bit of that comparison is all those conversations are taking place within Cisco as well, as to how far out from Cisco's core business do we want to get when we're incubating these bets? And yes, my previous employer, I mean, Google X actually goes pretty far out when it comes to incubations, the core business being primarily around ads, now Google Cloud as well. But you have things like Verily and Calico, and others, which are pretty far out from where Google started. And the way we're looking at these things within Cisco is, it's a new muscle for Cisco, so we want to prove ourselves first.
So the first few bets that we are betting upon are pretty close to Cisco's core but still not fitting into Cisco's BU when it comes to go-to-market alignment or business alignment. So one of the first bets that we're taking into account is around API being the queen when it comes to the future of infrastructure, so to speak. So it's not just making our infrastructure consumable as infrastructure as code but also talking about developer relevance, talking about how developers are actually influencing infrastructure deployments. So if you think about the problem statement in that sense, then networking needs to evolve. And I've talked a lot about this in the past couple of keynotes, where Cisco's core business has been around connecting and securing physical endpoints, physical I/O endpoints, wherever they happen to be, of whatever type they happen to be. And one of the bets that we are, actually two of the bets, that we're going after is around connecting and securing API endpoints, wherever they happen to be, of whatever type they happen to be. And so API networking or app networking is one big bet that we're going after. Another big bet is around API security. And that has a bunch of other connotations to it, where we think about security moving from runtime security, where traditionally Cisco has played in that space, especially on the infrastructure side, but moving into API security, which is earlier in the development pipeline, and higher up in the stack. So those are two big bets that we're going after. And as you can see, they're pretty close to Cisco's core business, but also are very differentiated from where Cisco is today. And once you prove some of these bets out, you can walk further and further away, or a few degrees away from Cisco's core. >> All right, Vijoy, why don't you give us the update about how Cisco is leveraging and participating in open source? >> So I think we've been pretty deeply involved in open source in our past. We've been deeply involved in Linux Foundation Networking. We've actually chartered FD.io as a project there and we still are. We've been involved in OpenStack, we have been supporters of OpenStack. We have a couple of products that are around the OpenStack offering. And as you all know, we've been involved in CNCF, right from the get-go, as a foundation member. We brought NSM in as a project. It's in Sandbox currently, but we're hoping to move it forward. But even beyond that, I mean, we are big users of open source, a lot of the SaaS offerings that we have from Cisco, and you will not know this if you're not inside of Cisco. But Webex, for example, is a big, big user of Linkerd, right from the get-go, from version 1.0, but we don't talk about it, which is sad. I think, for example, we use Kubernetes pretty deeply in our DNAC platform on the enterprise side. We use Kubernetes very deeply in our security platforms. So we're pretty good, pretty deep users internally in our SaaS products. But we want to press the accelerator and accelerate this whole journey towards open source, quite a bit moving forward as part of ET&I, Emerging Tech and Incubation, as well. So you will see more of us in open source forums, not just CNCF, but very recently, we joined the Linux Foundation for Public Health as a premier foundational member. Dan Kohn, our old friend, is actually chartering that initiative, and we actually are big believers in handling data in ethical and privacy-preserving ways.
So that's actually something that enticed us to join Linux Foundation for Public Health, and we will be working very closely with Dan and foundational companies that do not just bring open source but also evangelize and use what comes out of that forum. >> All right, well, Vijoy, I think it's time for us to dig into your keynote. We've we've spoken with you in previous KubeCons about the "Network, Please Evolve" theme that you've been driving on. And big focus you talked about was SD-WAN. Of course, anybody that's been watching the industry has watched the real ascension of SD-WAN. We've called it one of those just critical foundational pieces of companies enabling multi-cloud. So help explain to our audience a little bit, what do you mean when you talk about things like Cloud Native SD-WAN and how that helps people really enable their applications in the modern environment? >> Yes, well, I mean, we've been talking about SD-WAN for a while. I mean, it's one of the transformational technologies of our time where prior to SD-WAN existing, you had to stitch all of these MPLS labels and actually get your connectivity across to your enterprise or branch. And SD-WAN came in and changed the game there, but I think SD-WAN, as it exists today, is application-unaware. And that's one of the big things that I talk about in my keynote. Also, we've talked about how NSM, the other side of the spectrum, is how NSM or Network Service Mesh has actually helped us simplify operational complexities, simplify the ticketing and process health that any developer needs to go through just to get a multi-cloud, multi-cluster app up and running. So the keynote actually talked about bringing those two things together, where we've talked about using NSM in the past in chapter one and chapter two. And I know this is chapter three, and at some point, I would like to stop the chapters. I don't want this like an encyclopedia of "Networking, Please Evolve". But we are at chapter three, and we are talking about how you can take the same consumption models that I talked about in chapter two, which is just adding a simple annotation in your CRD, and extending that notion of multi-cloud, multi-cluster wires within the components of our application, but extending it all the way down to the user in an enterprise. And as we saw an example, Gavin Belson is trying to give a keynote holographically and he's suffering from SD-WAN being application-unaware. And using this construct of a simple annotation, we can actually make SD-WAN cloud native, we can make it application-aware, and we can guarantee the SLOs, that Gavin is looking for, in terms of 3D video, in terms of file access for audio, just to make sure that he's successful and Ross doesn't come in and take his place. >> Well, I expect Gavin will do something to mess things up on his own even if the technology works flawlessly. Vijoy, the modernization journey that customers are on is a never-ending story. I understand the chapters need to end on the current volume that you're working on, but we'd love to get your viewpoint. You talk about things like service mesh, it's definitely been a hot topic of conversation for the last couple of years. What are you hearing from your customers? What are some of the kind of real challenges but opportunities that they see in today's cloud native space? >> In general, service meshes are here to stay. 
In fact, they're here to proliferate to some degree, and we are seeing a lot of that happening, where not only are we seeing different service meshes coming into the picture through various open source mechanisms. You've got Istio there, you've got Linkerd, you've got various proprietary notions around control planes like App Mesh, from Amazon, there's Consul, which is an open source project, but not part of CNCF today. So there's a whole bunch of service meshes in terms of control planes coming in. Envoy is becoming a de facto sidecar data plane, whatever you would like to call it, de facto standard there, which is good for the community, I would say. But this proliferation of control planes is actually a problem. And I see customers actually deploying a multitude of service meshes in their environment, and that's here to stay. In fact, we are seeing a whole bunch of things that we would use different tools for, like API gateways in the past, and those functions actually rolling into service meshes. And so I think service meshes are here to stay. I think the diversity of service meshes is here to stay. And so some work has to be done in bringing these things together. And that's something that we are trying to focus in on as well. Because that's something that our customers are asking for. >> Yeah, actually, you connected for me something I wanted to get your viewpoint on, go dial back, 10, 15 years ago, and everybody would say, "Oh, I really want to have a single pane of glass "to be able to manage everything." Cisco's partnering with all of the major cloud providers. I saw, not that long before this event, Google had their Google Cloud Show, talking about the partnership that you have with, Cisco with Google. They have Anthos, you look at Azure has Arc, VMware has Tanzu. Everybody's talking about really the kind of this multi-cluster management type of solution out there, and just want to get your viewpoint on this Vijoy as to how are we doing on the management plane, and what do you think we need to do as an industry as a whole to make things better for customers? >> Yeah, I think this is where I think we need to be careful as an industry, as a community and make things simpler for our customers. Because, like I said, the proliferation of all of these control planes begs the question, do we need to build something else to bring all these things together? I think the SMI proposal from Microsoft is bang on on that front, where you're trying to unify at least the consumption model around how you consume these service meshes. But it's not just a question of service meshes, as you saw in the SD-WAN announcement, and going back to the Google conference that you just referred to. It's also how SD-WANs are going to interoperate with the services that exist within these cloud silos to some degree. And how does that happen? And there was a teaser there that you saw earlier in the keynote where we are taking those constructs that we talked about in the Google conference and bringing it all the way to a cloud native environment in the keynote. But I think the bigger problem here is how do we manage this complexity of disparate stacks? Whether it's service meshes, whether it's development stacks, or whether it's SD-WAN deployments, how do we manage that complexity? And single pane of glass is overloaded as a term, because it brings in these notions of big monolithic panes of glass. And I think that's not the way we should be solving it.
We should be solving it towards using API simplicity and API interoperability. And I think that's where we as a community need to go. >> Absolutely. Well, Vijoy, as you said, the API economy should be able to help on these, the service architecture should allow things to be more flexible and give me the visibility I need without trying to have to build something that's completely monolithic. Vijoy, thanks so much for joining. Looking forward to hearing more about the big bets coming out of Cisco, and congratulations on the new role. >> Thank you, Stu. It was a pleasure to be here. >> All right, and stay tuned for lots more coverage of theCUBE at KubeCon + CloudNativeCon. I'm Stu Miniman. Thanks for watching. (upbeat music)
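To picture the "single annotation in your CRD" consumption model Vijoy describes for NSM-style multi-cluster wiring, here is a hedged Python sketch that patches a Deployment's pod template. The annotation key, the deployment name, and the network service name are illustrative assumptions rather than the exact identifiers any given NSM release uses; it also assumes the Python kubernetes client and a working kubeconfig.

```python
# Hedged sketch only: the annotation key below is modelled on NSM's client annotations
# but is an assumption, not the exact key of any particular NSM release. Assumes the
# Python "kubernetes" client and a working kubeconfig; "payments" is a placeholder name.
from kubernetes import client, config


def request_network_service(deployment, namespace, network_service):
    """Patch a Deployment's pod template so its pods request an extra mesh-provided 'wire'."""
    config.load_kube_config()
    patch = {
        "spec": {
            "template": {
                "metadata": {
                    # Hypothetical annotation naming the network service to attach to.
                    "annotations": {"networkservicemesh.io/networkservice": network_service}
                }
            }
        }
    }
    return client.AppsV1Api().patch_namespaced_deployment(
        name=deployment, namespace=namespace, body=patch
    )


if __name__ == "__main__":
    # e.g. wire every replica of "payments" to a "vpn-gateway" service in another cluster
    request_network_service("payments", "default", "vpn-gateway")
```

The design point the interview makes is that the application owner only edits metadata; the mesh control plane, not the developer, does the cross-cluster or SD-WAN plumbing behind that one annotation.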

Published Date : Jul 28 2020

SUMMARY :

and the ecosystem partners. One of the keynote speakers nice to see you again. since the last time we got together. and concentrate on the 20% that that you might be And one of the bets that we are, that are around the OpenStack offering. in the modern environment? And that's one of the big of conversation for the and that's here to stay. as to how are we doing and bringing it all the way and congratulations on the new role. It was a pleasure to be here. of theCUBE at KubeCon + CloudNativeCon.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dan Kohn | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
Cisco | ORGANIZATION | 0.99+
Liz Centoni | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Red Hat | ORGANIZATION | 0.99+
Cloud Native Computing Foundation | ORGANIZATION | 0.99+
Stu | PERSON | 0.99+
Chuck | PERSON | 0.99+
80% | QUANTITY | 0.99+
Stu Miniman | PERSON | 0.99+
Gavin | PERSON | 0.99+
20% | QUANTITY | 0.99+
Linux Foundation for Public Health | ORGANIZATION | 0.99+
Vijoy | PERSON | 0.99+
Gavin Belson | PERSON | 0.99+
Europe | LOCATION | 0.99+
ET&I | ORGANIZATION | 0.99+
Emerging Tech | ORGANIZATION | 0.99+
NSM | ORGANIZATION | 0.99+
Vijoy Pandey | PERSON | 0.99+
CNCF | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Verily | ORGANIZATION | 0.99+
two big bets | QUANTITY | 0.99+
John Chambers | PERSON | 0.99+
Calico | ORGANIZATION | 0.99+
KubeCon | EVENT | 0.99+
one | QUANTITY | 0.99+
VMware | ORGANIZATION | 0.99+
Ross | PERSON | 0.99+
10 | DATE | 0.99+
one big bet | QUANTITY | 0.98+
One | QUANTITY | 0.98+
Webex | ORGANIZATION | 0.98+
this year | DATE | 0.98+
two things | QUANTITY | 0.97+
Linux Foundation for Public Health | ORGANIZATION | 0.97+
CloudNativeCon | EVENT | 0.97+
Linkerd | ORGANIZATION | 0.97+
both | QUANTITY | 0.97+
first | QUANTITY | 0.97+
chapter three | OTHER | 0.97+
Tanzu | ORGANIZATION | 0.96+
today | DATE | 0.96+
Incubation | ORGANIZATION | 0.94+
Arc | ORGANIZATION | 0.94+
Emerging Tech and Incubation | ORGANIZATION | 0.94+
first bets | QUANTITY | 0.93+
KubeCons | EVENT | 0.93+
bets | QUANTITY | 0.93+
chapter two | OTHER | 0.92+
FD.io | ORGANIZATION | 0.92+
two of | QUANTITY | 0.92+
first few bets | QUANTITY | 0.91+
chapter three | OTHER | 0.9+
Anthos | ORGANIZATION | 0.9+

Deepak Singh, AWS & Abby Fuller, AWS | AWS re:Invent 2019


 

>> Narrator: Live from Las Vegas, it's theCUBE. Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> Welcome back, about 65,000 here in attendance, at AWS re:Invent 2019. You're watching theCUBE, and I am Stu Miniman, the host for this segment, and happy to welcome back to our program two of our CUBE alumni. Sitting to my right is Abby Fuller, who is the principal technologist for containers and Linux, with Amazon Web Services. Sitting to her right is Deepak Singh, Vice President of Compute Services, also with AWS. Thank you so much for joining us on the program. >> Thanks for having us. >> Thank you for having us. >> Stu: All right, so as I said, both of you have been on the program, and boy your team's been busy. I mean, one of the things I love, first of all, there is a roadmap for many of the things that are going on. So, we do understand what's happening in the future, but, Deepak, maybe just tell us a little bit about your group and kind of the main focus, and let's start there. >> Deepak: So, my group goes beyond containers. It includes things like Linux systems, our high performance computing organization. But for the purposes of re:Invent, let's stick to the containers org. The containers org owns all of AWS's containerized products. So that includes ECS, EKS, Fargate. We also own our service mesh offering, which is App Mesh. So the way I like to think about it is, it's the right way to build applications in the modern era, and it's a team that stays quite busy, because this is such a hot space to be in. >> Stu: All right, so we're going to talk mostly about containers, but your shirt is talking about the Linux piece. Tell us what your shirt says. >> Deepak: Ahh, yes, this is the only right way to spell AMI. Unfortunately, my previous, when I was in New York, Corey was at the table interviewing me, and I wore this just for him. >> Stu: So, so, so, if it is AMI, then we're going to spend some time talking about EKS. >> Yes. (Abby chuckling) >> And Esses. >> Yes, which one? (Deepak laughing) We will figure that out. For AWS is AWS, I think, is how we will do it. So, absolutely, we're not going to talk about ontological arguments in there. But, Abby, a whole lot of new services in the container space. I want to put a pin in it and put Fargate aside for a second. >> Abby: Sure. >> 'Cause lots of things we want to dig into there. But a lot of other things have been announced, in like the last month or so. Maybe, give us a little bit of a view. >> Yeah, I think a couple big ones for us. So, Fargate and Spot, so run on spare Fargate capacity for up to a 70% discount off of standard Fargate pricing. (mumbling) things like image vulnerability scanning for images on ECR. We launched, over the last few days at re:Invent, capacity providers for ECS, which let you split your traffic between on-demand and spot instances in the same cluster. We also launched something called Cluster Auto Scaling. So, some finer-grained control over how your cluster scales in on ECS. >> Stu: All right, want to take a quick step back. So, Fargate, announced a couple of years ago. >> Deepak: Yep. >> Was only first supported on ECS. Definitely, I've talked to lots of customers, very excited about it. >> Deepak: Yep. >> Maybe talk to us a little bit about how Fargate fits in the whole container discussion. >> Deepak: Yeah. >> And we'll hit with the news. >> Yeah, and, actually, a good way to think about it is from a native AWS standpoint.
If you're a customer running containers, the way we think about our services is: You need a place to store those containers, so that's ECR. You could use your own registry, you could pick a third party one, that's fine. But most of our customers just use ECR. Then you pick your container scheduler. That's either ECS or EKS depending on your preferences. And then you need to figure out where you want to run your containers. And, of course, when we launched ECS five years ago, at re:Invent, there was only one way to do it: On EC2 instances. And two years ago, we added in what in our mind is a cloud native natural way to run containers, which is Fargate. So Fargate serves as a runtime compute engine for containers, and you can pick your scheduler on top of it, and go make hay with your applications. So that's kind of how we think the hierarchy works, and it works pretty well for most customers. They'll start off often with EC2 and move to Fargate over time or mix and match, and it's kind of fascinating to see how many customers of ours have decided they want to be all-in on Fargate. Which is a great place to be for us. >> Stu: Okay, but the big news which actually got a good cheer in the keynote yesterday, is Fargate for EKS. So what's the importance of this? >> Yeah I think (mumbling) I think it's something we've been talking to customers about for a while and it's the ability to run your Kubernetes pods on Fargate capacity. I think it's really speaking to folks who love Kubernetes as a tool and as a community, but it can be a pretty significant lift operationally. And with Fargate they can use APIs that they want or the open source tooling that they want but they don't have to worry about provisioning and managing that EC2 capacity. >> Stu: All right, so Deepak I actually was having a conversation with a good AWS customer, yesterday, and he said he actually started out on Kubernetes before EKS existed, on AKS. And migrated over to AWS when EKS became available. And he said Fargate really interests me, but one of the main reasons he does Kubernetes is he wants to have some portability, has some concerns that, he knows what services he uses and how if he needed to move something there, what do you say to a customer that says Fargate's interesting me, but I'm concerned I'm going to get locked in if I buy into this model. >> I would say that he shouldn't worry about it, because of two reasons, maybe more than two. One is: the unit in Fargate that you interact with and work on is the same unit that you interact and work on with Kubernetes in general. Which is the Kubernetes pod. It's the pod spec, it's just a pod, no difference. You can take that same pod and run it on Timbuktu cloud and it will still run. So that's part one. The other one is that he's using the same tools, he's using kubectl. And in fact you can mix and match your Kubernetes clusters. You can run 95% of the application on Fargate, and five percent of it on EC2. All they are doing is changing the pod annotation, and if you decide you want to run none of it on Fargate, you just flip that and suddenly everything is running on EC2 capacity. So I actually don't think there's that much to worry about, because it's just the same pod. It's still the same tooling, the operational model is a lot simpler. >> So Abby, we've talked to you at DockerCon, and KubeCon, simplicity is not the word that we hear when we talk about this whole container space. >> Abby: Sure. >> Traditionally. How are we doing overall?
I mean, I'm watching the community here, and it's like, wait, Fargate sounds cool but where's my persistent volumes? You know, where are we in, you know, give us a little bit of the road map as to where we are to make this, you know, simple and managing more of my environment. >> Yeah, I think the way that I like to look at it, right, is that we've spent, and it's not just us, but we spent a lot of time looking at things like patterns and abstractions that help make these work flows easier for developers. And I think one of the launches that's interesting in that vein is the ECS CLI version two, which we launched a few days ago. And that will help you deploy like a production ready containerized application. It'll help you with the CI/CD angle, it'll help you with the monitoring and the observability. So I think it's about abstracting away, and adding patterns on top to make some of these common operations and work flows really modular and repeatable, and extendable. And then it's about having the ability to customize where I need to. So being able to run on Fargate, but also to use workloads running on EC2 where I need to, and being able to mix and match, and to focus my energy where I really get any benefit from customizing, rather than having to do the whole thing from the ground up. >> Stu: You know, feedback I've gotten from my friends in the app dev community, is that hybrid is more and more becoming a standard deployment model. Obviously things like Outposts and some of the other solutions from Amazon are extending the AWS model of doing things, but many of them also look at just Kubernetes, >> Deepak: Yep >> as a layer to do that. How should we be thinking of this from your solutions? >> Deepak: Yeah, so we thought about both, though. If you noticed in Andy's announcement yesterday, among the list of services available on day one were ECS and EKS. And actually App Mesh as well wasn't on the list, but App Mesh is available on Outposts on day one as well. I think when we think about customers who want to run and stay in their own capacity and their own data centers, because EKS is built on upstream Kubernetes with no modifications, the same application, as long as they're running on upstream Kubernetes on their side, will just run on EKS. And there's a number of models that work there. A great model is the kind that Cisco is running, where they will manage it for you in both places. They become the first person you call, and on AWS it's just EKS. And on premises it's what Cisco has decided to build. Our pro-serv team will also help you, for example. So I think there's a number of modes that work there but the key part, and it's the reason why we have stayed with upstream Kubernetes, is we never want to make someone say, oh we can't use EKS because they've somehow modified Kubernetes, and I think that is super important for us. >> Stu: Yeah, I mean Abby I know you're an active participant in the community, what do you say to people that look at Amazon, Deepak you talked a little bit about Fargate, you don't need to be concerned, it's the same images, so speak a little bit, maybe if you could, to Amazon's community participation, and what you're generally hearing from your customers. >> Abby: Yeah, so I think the root of it, right, is that we're all building with the same building blocks. I think something that Amazon has been really strong at is open sourcing primitives.
And we, I think we do really well with saying we built this to solve a problem for us, but we think you might want it too. And in terms of community support, we have been open sourcing more over the last year; we open sourced our road maps in November last year. We run developer previews off the GitHub road map, App Mesh has a public preview channel as well, so we've been trying to involve the community participation earlier and earlier in our product development life cycle, so that, especially with things like service mesh, where it's really pretty new, we can make sure that we have the voice of all our users and our customers in there, as early as possible. But to get their hands on keyboards to try it out as soon as they can. >> Deepak: And actually a great example of that is the work that Weaveworks has done. Talking about people who can run Kubernetes on AWS and on premises, they have this project called "Weave Ignite" where they're basically running Kubernetes on Firecracker on premises. And then on AWS a customer just runs on EKS, as an example. And that, I think, is a part not everybody realizes is possible. But I think the fact that people are doing it excites us a lot. >> Stu: All right, I know you're both meeting with a lot of customers this week, maybe Deepak start with you. Any surprises or any misconceptions, other than, I know there are a lot of people wearing teal shirts, with a certain pronunciation. But bring us inside some of the mindset of your customers here. >> Deepak: So actually, our conversations are very consistent. I think the community as a whole, our customer base as a whole, they all want to get to the same place. How can we move really quickly? How can we give our developers the ability to be more productive? Without putting our company at risk, having the right level of governance? Having the right controls in place? And I think that's a mainly consistent theme across the board. I guess the one thing that would be hard to remind people of a little bit, is a lot of people often think Fargate sits on top of ECS and EKS; it sits below that, and actually the fact that now there is an EKS Fargate, people understand that more quickly. Before that it was a little trickier. But other than that, I think our customers almost all, they come from different places, have very similar problems, they want developers to move quickly and deliver business value, and platform engineering teams that we speak to want to figure out how to get out of the way. And that's been great! >> It's interesting, Abby, I love your viewpoint from the developer community. Andy talked on stage about, very much, to do true transformation there needs to be the leadership driving things down. I'm curious what you're seeing, customers you've talked to, people you've had, 'cause many of these tools we're talking about, you know, started in the developer world. >> Yeah, I mean there's been, like, an increasing amount of curiosity around the cultural side of it. So how can I get my team to work like that? How can I get my team to ship more safely, more quickly, by getting operations out of the way? And I think you see more and more interest in that. So how can we build the tools that work the way our developers do? So we get all the things that we want, so security and compliance and availability. The developers get what they want, which is easy work flows that match the way they want to work. So you see a lot of curiosity around that.
So how do we get to the place where we can run everything on Fargate, and benefit from all the new serverless, serverless-style (mumbling). >> Stu: All right, real quick just give you the final word. Any websites, or events, or things that people should know when they want to learn more and get engaged? >> Yeah, I think I'd send people first and foremost to the GitHub public road maps. It is the easiest, fastest way to let us hear your voice, and what you want to see us build next. I think especially these next couple weeks coming out of re:Invent, as people start to get their hands on what we announced, I think I'm really curious for them to take that back, and then be like, this is great, but here's what I want to see next. And I'd love to see that happen on the road maps. >> Yeah, about a month or so ago, maybe a couple months, we started a dedicated blog for containers on the AWS site. One of the nice things about it is a lot of the contributors to that blog site are principal engineers, and engineers in our organization. For example, one of the principal engineers in my org, Malcolm Featonby, has a whole blog post on how to think about scaling and best practices. I think I would encourage people who've now seen what we have, all the new services we're developing, and that's where you'll get the details on how you can use them, how we built them, and I encourage everybody to go to that blog site and check out what we're doing. >> Stu: All right, Deepak, Abby, congratulations to you and your team, great progress, and really appreciate that we're able to look at the road map, and definitely hope to catch up with you both soon. >> Abby: Thanks so much! >> Thank you so much. >> Stu: All right, I'm Stu Miniman, and back with much more, right in a second, thanks for watching theCUBE. (Techno music)
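To make Deepak's framing concrete, Fargate as the compute engine underneath whichever scheduler you pick, the following is a minimal boto3 sketch that launches an existing ECS task definition on Fargate and Fargate Spot capacity. It is not from the interview; all resource names are placeholders, and the capacity providers are assumed to be associated with the cluster already.

```python
# Hedged sketch only: cluster, task definition, subnet, and security group are placeholders,
# and it assumes the FARGATE and FARGATE_SPOT capacity providers are already associated
# with the cluster (for example via put_cluster_capacity_providers).
import boto3

ecs = boto3.client("ecs", region_name="us-west-2")

response = ecs.run_task(
    cluster="demo-cluster",            # placeholder cluster name
    taskDefinition="web-app:3",        # existing task definition family:revision
    count=2,
    # Same task definition, different capacity: mix regular Fargate with cheaper,
    # interruptible Fargate Spot capacity.
    capacityProviderStrategy=[
        {"capacityProvider": "FARGATE", "weight": 1, "base": 1},
        {"capacityProvider": "FARGATE_SPOT", "weight": 3},
    ],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])
```

The point of the sketch is that nothing about the application changes when the capacity underneath it does; only the strategy block decides whether the tasks land on on-demand Fargate, Fargate Spot, or a mix of the two.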

Published Date : Dec 5 2019

SUMMARY :

Brought to you by Amazon Web Services and Intel, and happy to welcome back to our program on the program, and boy your team's been busy. So the way I like to think about it is, Stu: All right, so we're going to talk and I wore this just for him. then we're going to spend some time talking about EKS. in the container space. in like the last month or so. which let's you run, split your traffic between Stu: All right, want to take a quick step back. Definitely, I've talked to lots of customers, Maybe talk to us a little bit about how Fargate fits and it's kind of fascinating to see Stu: Okay, but the big news which actually and it's the ability to run your Kubernetes pods and how if he needed to move something there, So actually think there's that much to worry about, and KubeCon, simplicity is not the word that we hear as to where we are to make this, you know, and to focus my energy where I really get any benefit and the app dev community, is that hybrid as a layer to do that. is running, where they will manage it for you and what you're generally hearing from your customers. but we think you might want it too. And that, I think that part of your customers here. and platform engineering teams that we speak to there needs to be the leadership driving things And I think you see more and more Stu: All right, real quick just give you and foremost to the GitHub public road maps. a lot of the contributors to that blog site and definitely hope to catch up with you both soon. and back with much more, right in a second,

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Deepak | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Amazon Web Services | ORGANIZATION | 0.99+
Abby Fuller | PERSON | 0.99+
Deepak Singh | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
New York | LOCATION | 0.99+
Stu Miniman | PERSON | 0.99+
Malcolm Featonby | PERSON | 0.99+
95% | QUANTITY | 0.99+
Andy | PERSON | 0.99+
Corey | PERSON | 0.99+
two reasons | QUANTITY | 0.99+
five percent | QUANTITY | 0.99+
Abby | PERSON | 0.99+
November last year | DATE | 0.99+
Stu | PERSON | 0.99+
last year | DATE | 0.99+
Intel | ORGANIZATION | 0.99+
yesterday | DATE | 0.99+
One | QUANTITY | 0.99+
one | QUANTITY | 0.99+
both | QUANTITY | 0.99+
ECR | TITLE | 0.99+
five years ago | DATE | 0.98+
SisCo | ORGANIZATION | 0.98+
US | LOCATION | 0.98+
two | QUANTITY | 0.98+
two years ago | DATE | 0.98+
both places | QUANTITY | 0.98+
first | QUANTITY | 0.98+
this week | DATE | 0.98+
ECS | TITLE | 0.98+
Linux | TITLE | 0.97+
DockerCon | ORGANIZATION | 0.97+
one way | QUANTITY | 0.97+
Fargate | ORGANIZATION | 0.96+
EKS | TITLE | 0.96+
more than two | QUANTITY | 0.96+
Kubernetes | TITLE | 0.96+
Fargate | TITLE | 0.95+
EC2 | TITLE | 0.95+

Matt Klein, Lyft | KubeCon 2018


 

>> Live from Seattle, Washington it's theCUBE, covering KubeCon and CloudNativeCon North America 2018. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Hey, welcome back everyone. We're live here at KubeCon, Cloud Native. This is theCUBE's live coverage, three days of wall-to-wall coverage. Day two, I'm John Furrier with Stu Miniman. Our next guest is an end user, also a program chair of EnvoyCon, which is sold out. Matt Klein, software engineer with Lyft. Great to have you on again, good to see you. Thanks for spending the time. >> Thank you, great to be here. >> I know you've been busy, your voice is getting hoarse. You guys had a successful EnvoyCon, sold out. Was on the front-end of KubeCon and CloudNativeCon. Interesting, right? This is the rising tide. What's going on? How'd that go? Why all the interest? >> I continue to be blown away by the overall reaction. So we had EnvoyCon on Monday. We had, I think, almost 350 people come, sold out. I think we could have had a larger room if it was available, but we didn't. Just amazing to walk around this conference and see all the cloud vendors getting behind Envoy, lots of companies building on top of Envoy, all of the end users. It just seems to be everywhere here and to have only been open source for a little over two years, I mean it's just unbelievable. >> Matt you know I think a year ago service mesh was something we were still getting the basic understanding of what it was and it definitely, there's certain interviews we've done this week, you know service mesh, you know Envoy, things like Istio are going to be even bigger than Kubernetes. >> Yeah, well you know I've been to the last few KubeCons and every KubeCon, I think that it can't get much bigger or more nuts, and no, no. Everyone seems to be a little bit crazier. But no, just from the community perspective, EnvoyCon was fantastic because we had mostly end user talks so it was really fun to get people together and to see all the different things they're building on top of Envoy. >> One of the things that's impressive and I think is a real notable story, and of course we talked about it a bit last time you were on, is that Lyft as an end user kind of encapsulates and epitomizes kind of the innovation building going on. A lot of people have been building a lot of cool stuff using cloud and getting down and dirty and rolling their own. And actually creating business value, not in a classic IT-buys-IT way, just build IT, build systems >> Yep >> To build business value and then donating it in to scale up with the community is pretty notable so congratulations on that. >> Thanks. >> Now you have startups kind of acting the same way so the line between a vendor and end user is certainly changing. I mean, we were end users. Well they're all kind of end users. This is a dynamic that is, I think, notable for this generation and it's real. Talk about that dynamic because I think this is a real success story and also a trend in the industry. >> You know so I think for us what's fun for me about not only building Envoy but seeing how it's evolved is really what you said, is that I like solving actual problems for people, right? We can have different opinions on what the different vendors are doing, of course. There's lots of people doing different things, but for me at least, working at a company like Lyft it's super fun to be able to build technology that solves specific problems that the business is actually having.
Now if something becomes successful sure we're going to see a lot of vendors come in hopefully build products that can help other folks. The way that I look at it and this has been an interesting evolution for me over the last year is I would say a year ago, people would come to me and say "Hey Matt, I've heard about Envoy I'd like to use to help solve some problems and I went to the website and I don't understand it, like it's too complicated to use. The documentation is not good enough." And I think over the last year my thinking has evolved a little bit in the sense that we've seen so many people or end users or companies build fantastic products on top of Envoy and I think one of the reasons Envoy's become so successful is that it's a building block that other people can come and add vertical value. So whether that's a more sophisticated internet company like Lyft or a vendor or a cloud vendor. I think that's what's made the community so successful is that we can build this base thing and it's amazing but then we can allow people to add vertical value. >> And you know that's an interesting dynamic of both cloud and open source. You look at Amazon, the most successful public cloud Their core building blocks was EC2 and S3 originally. Open source is about building on top of other things. Again the dynamic between open source and cloud scale is really kind of the magic. >> Well and just in terms of how we actually go through and I think fund some of these projects ends up being very interesting. Just in the sense that we have a lot of full time people working on Envoy and they're working on it actually for different reasons. We have people working on it as end users, we have people working on it because they're building vertical products but in the end everyone wins because the base technology stays technology focused. I think that has been what has been successful, is that we allow people to succeed in different ways. >> Alright, so Matt, you're at the forefront of one of the most difficult problems that we're looking at these days. It's scale, distributed systems, and edge and how that ties in. I want to get your kind of macro level viewpoint as to how we're doing in this industry? What are some of those tough challenges we've talked about? We talk about things like IoT and Edge and vehicles of course have a lot of them. >> Yeah so I mean, I think when you say scale there's two things that comes to mind. There's physical scale, and I do agree actually that we are continuing to push more compute out to the edge and in fact, I talked about this a little at EnvoyCon, but I have some very exciting projects or plans to bring Envoy actually to mobile phones and to Edge devices starting next year. I'll have more to say about that in the spring. I'm very excited about that. I do think there's a lot of opportunity to better evolve how we ingress data from the edge, how we do compute out at the edge, a bunch of other things. And I think Envoy will be at the forefront of that but when you talk about scale I still think that there's a lot of human scale involved of how we scale the number of developers that are working on all of these architectures. And I do think that Service Mesh and Kubernetes and a bunch of other stuff ultimately if we're successful it helps us grow the number of product developers that can successfully work on these systems. 
I still think we have a long way to go but I think that's one of those areas where I think some of these technologies help people both at physical internet scale but also at human scale. >> Well I really appreciate your work you're doing. Your contributions to the community, both on solving the problems with Envoy and also being the program chair of EnvoyCon I think is going to be great for the community. I got to ask you as you get pulled into a lot of these, I won't say political, or media kind of conversations you got to kind of be a helicopter and get above and get high level and talk to people who are discovering and learning for the first time which is part of what communities do. How do you talk about those other end users that say "Hey Matt, I'm going to reshape our company, I'm going to reshape their IT investments all based on open source and I really want to learn more about Envoy and just the benefits of Cloud Native in general. I got to go, and I'm a believer, I got to go talk to some wanna-believers or non-believers in my company and I got to make my point home?" How do they be successful? What's your advice to that? Because that's a challenge a lot of people are having. >> I totally agree My advice, first and foremost, is to start by understanding what problems are trying to be solved. And I actually think that sounds very obvious but I think that people don't do it enough because I think sometimes we come to conferences like this and we see all the amazing technology that people are building and it seems fantastic but if one tries to adopt everything that they see here without understanding the incremental steps and the things that are the problems that are being solved that can be very problematic. >> It's a new kind of technical depth. It's kind of a new way >> My advice is to start with what are the actual problems, right? And whether that be observability issues, or authentication issues, or security issues, or whatever, is to start with the problems and then work backwards and my advice is always incremental, no big bang. And try to figure out the right incremental path of adopting the smallest piece of technology that solves a particular problem and go from there. >> And build economies of scale to the mission. >> Right, and whether that means working with a vendor or working with the raw open source technology that's a personal decision of each company to figure out what their comfort level is. But that really is my advice, is start with the problem statement and then figure out the easiest and the quickest incremental path forward. >> The trends that we're seeing Stu was talking earlier, a lot of hyper-scalers here, a lot of diversity coming into the community just what's the hallway conversation amongst the people in the community around as the community grows larger? I mean open source community core persona or constituency, then you got the down-stream impact of that is IT is changing, developers are coming in. So it's not so much changing personas and target audiences of the environment. Open source is still core. That's kind of the down-stream impacts. So you're seeing a lot of people come in, IT people, new developers. How does the community look at that? What's your view on how to engage but also not alienate new people? >> Well I think ultimately we are attempting to build systems help people be successful and be more productive, right? I think the natural evolution of that is bringing some of this technology into the enterprise. 
We have to recognize that as the community scales the baseline level of knowledge is different. I mean we all come at it with different understanding of whether it be networking or orchestration or security. And I think what I would say is that we're never going to build one technology that makes everyone happy. It is impossible. It's impossible to build a technology that satisfies both the expert user and the entry level user. So I believe that we need to build layered technologies, layered abstractions that allow people to plug in at different levels and some of them are more opinionated than others. And I think it is recognizing and supporting a community that has base level technology, has vendors adding value at different layers to help people, and really just respecting the fact that people come at it with different levels. >> I mean application assembly is really where it's going. >> Exactly, I agree. >> Matt, I'm wondering if you could reflect back for us. You're the creator of Envoy, I saw you up on stage yesterday, the supportive team and the community that helped this grow. And you've reached graduation. What does that mean to you, for the team? It's different than a school graduation, this is not the end of something, you don't get a diploma out of it. >> Is there a party? >> I don't know if there was. I don't think they invited me. >> Get pictures? >> The Cloud Native Computing Foundation picking up the bar tab? >> I don't know, maybe. So like from a project perspective, in terms of how we go about our day to day, I don't think that much changes. I think we have been operating as a mature, graduated-level project probably for quite some time, in terms of adoption and methodology and stuff like that. I think what graduation means for the project is it's a vote of respect from the larger industry and the community that Envoy isn't going to disappear, it's not going to become an abandoned project on GitHub if, for example, Lyft stops investing in it. I think we've reached a critical mass of project success and I think what that means is that it allows folks that may be at more conservative organizations, who may be a little later to adopt newer technologies, to give them the confidence that says Envoy is not going to disappear, that we can potentially bet some of our future on Envoy. So I think it's a vote of confidence, I don't think it changes a lot about how we operate on a day to day basis. >> Matt, thanks for coming on theCUBE. Again, congratulations. Seminal work, you guys are doing great. Lyft is really, I think, a great example of the new dynamic in open source where they're building and they're working with the community to continue to extend that. And this is what we want, that's what open source is all about. >> It is. >> Congratulations. And we got to have a graduation party for Envoy. We'll figure it out, get photos and pictures and everything else. Thanks for coming on theCUBE. >> Cool, thank you very much. >> theCUBE coverage here live, I'm John Furrier with Stu Miniman. More coverage after this short break, stay with us. (upbeat music)
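Matt's advice to start from a concrete problem such as observability, and to adopt the smallest piece of technology that solves it, can be sketched with a short script that reads a single Envoy sidecar's admin stats rather than rolling out a full mesh. It is a hedged illustration only: it assumes an Envoy whose admin interface is reachable on localhost:9901 (the address comes from your bootstrap config) and that the per-cluster upstream_rq_5xx counters are present; stat names and availability vary with configuration.

```python
# Hedged sketch only: assumes an Envoy admin interface on localhost:9901 (set in the
# bootstrap config) and that per-cluster upstream_rq_5xx counters exist; stat names
# differ across Envoy configurations, so treat the filter below as illustrative.
import urllib.request

ADMIN = "http://localhost:9901"


def upstream_5xx_counters():
    """Return the per-cluster upstream 5xx counters from Envoy's /stats admin endpoint."""
    with urllib.request.urlopen(f"{ADMIN}/stats") as resp:
        lines = resp.read().decode().splitlines()
    counters = {}
    for line in lines:
        name, sep, value = line.partition(": ")
        if sep and name.startswith("cluster.") and name.endswith("upstream_rq_5xx"):
            counters[name] = int(value)
    return counters


if __name__ == "__main__":
    for counter, value in sorted(upstream_5xx_counters().items()):
        print(f"{counter} = {value}")
```

Starting this small keeps the adoption incremental in the way the interview describes: one proxy, one observability question, and a clear path to grow from there.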

Published Date : Dec 12 2018

SUMMARY :

Brought to you by Red Hat, Great to have you on This is the rising tide. and see all the cloud vendors getting the basic understanding of what it was and every KubeCon, I think and of course we talked to scale up with the community kind of acting the same way that the business is actually happening. is really kind of the magic. Just in the sense that we of one of the most difficult problems I still think we have a long way to go I think is going to be and the things that are It's a new kind of technical depth. of adopting the smallest to the mission. to figure out what their comfort level is. and target audiences of the environment. And I think what I would say is that I mean application assembly What does that mean to you, for the team? I don't think they invited me. and the community that Envoy of the new dynamic in open source where and everything else. I'm John Furrier with Stu Miniman.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Matt Klein | PERSON | 0.99+
Matt | PERSON | 0.99+
John Furrier | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Stu Miniman | PERSON | 0.99+
Cloud Native Computing Foundation | ORGANIZATION | 0.99+
Red Hat | ORGANIZATION | 0.99+
Monday | DATE | 0.99+
yesterday | DATE | 0.99+
three days | QUANTITY | 0.99+
Envoy | ORGANIZATION | 0.99+
next year | DATE | 0.99+
Lyft | ORGANIZATION | 0.99+
Cloud Foundation | ORGANIZATION | 0.99+
two things | QUANTITY | 0.99+
a year ago | DATE | 0.99+
KubeCon | EVENT | 0.99+
first time | QUANTITY | 0.98+
last year | DATE | 0.98+
both | QUANTITY | 0.98+
each company | QUANTITY | 0.98+
one | QUANTITY | 0.98+
over two years | QUANTITY | 0.97+
EnvoyCon | EVENT | 0.97+
EC2 | TITLE | 0.97+
Day two | QUANTITY | 0.97+
S3 | TITLE | 0.97+
One | QUANTITY | 0.97+
this week | DATE | 0.94+
Edge | ORGANIZATION | 0.94+
CloudNativeCon | EVENT | 0.94+
KubeCons | EVENT | 0.94+
almost 350 people | QUANTITY | 0.93+
one technology | QUANTITY | 0.93+
Seattle, Washinton | LOCATION | 0.91+
CloudNativeCon North America 2018 | EVENT | 0.9+
Stu | PERSON | 0.89+
GitHub | ORGANIZATION | 0.88+
KubeCon 2018 | EVENT | 0.87+
first | QUANTITY | 0.84+
Istio | ORGANIZATION | 0.84+
theCUBE | ORGANIZATION | 0.73+
Envoy | TITLE | 0.7+
EnvoyCon | ORGANIZATION | 0.68+
Service Mesh | ORGANIZATION | 0.64+
Kubernetes | ORGANIZATION | 0.61+
Kubernetes | TITLE | 0.57+
Cloud Native | ORGANIZATION | 0.55+
Cloud | ORGANIZATION | 0.54+

Raghu Raghuram, VMware | VMware Radio 2018


 

>> [Narrator] From San Francisco, it's theCUBE. Covering Radio 2018, brought to you by VMware. >> Hey, welcome back everyone. This is theCUBE's exclusive coverage of Radio 2018. We are in San Francisco for VMware's Radio 2018. It's their R&D fiesta, party. As Steve Herrod said, former CTO, it's like a sales kickoff for engineers. It's a great time, but it's also serious. A lot of real serious discussion and of course people are flexing their technical muscle and stretching their minds. And I'm here with one of the chief operators, one of the main principals and a legend at VMware, Raghu Raghuram. Chief Operating Officer, new title. Chief Operating Officer, Products and Cloud Services. >> That's right. >> Great to see you. >> Great to see you, John. >> What year did you join VMware? >> 2003 (chuckling) >> 15 years >> So, you've seen many of these Radios. >> Yes, it's one of the highlights of the year for me. >> Yeah, super important architect of VMware, great part of the community, leader, architect of the AWS relationship. >> [Raghu] Sure >> Part of that movement with Andy Jassy, Sanjay Poonen. This is the 14th year of Radio and VMware has changed a lot since you joined. It's now a world class organization. Getting check marks for one of the best places to work. Certainly for engineers it's like a great party environment. Take a minute to explain the Radio culture, its 14th year, there's t-shirts behind us, to commemorate the key milestones, where it's come from, where it's gone, your thoughts on the program and the community. >> Yeah, I mean this is in fact one of the unique characteristics of VMware. I have checked around with my peers in the industry and I don't think any other tech company of our size does this. Radio stands for R&D innovation offsite. Like you said, we started fourteen years ago just to take a bunch of engineers out from their daily grinds and say, "what could we be building fundamentally that's groundbreaking?" So, I would say it's a cross between a wild science fair and a research conference. In fact, both of these go hand in hand at this place. People publish papers and there is a selection committee just like in serious conferences. In fact, Ray had some amazing stats for this year's submissions and the selection is very very rigorous. At the same time, you'll go upstairs and you'll see the exhibition hall where there are all kinds of things that are displayed. Things that could be very well incremental things in the next release and things that are wild and wacky off the wall that we might never ever do. So, it's really the full gamut. Another interesting thing is we've gone bigger. We are getting people from pretty much all parts of the world. I think there is representation from 25 countries. >> [John] How many engineering centers are there roughly? I mean, there's core centers and then you have engineers all over the world. How many engineers, ballpark? >> I would say, in terms of medium to big size centers, there are probably over a dozen across the globe and literally every continent. Clearly, in the US we have four big centers. In Europe, we have three at least. In Asia, we have another three or four. So, we definitely have over 10. >> I mean everyone who knows VMware and also knows theCUBE, for nine years, well this is our ninth year covering VMworld, all you gotta do is look at VMworld and you can tell one thing right out of the gate. Very community oriented. All the decisions are made in the community.
Also, people who know VMware know you're very much an engineering organization. >> [Raghu] Yep. >> This is not like a lot of marketing fluff. Although you do have some good marketing here and there, the point is it's an engineering culture with community. This is unique. I've seen companies that don't walk the talk on "community engineering". They have silos, there's a lot of infighting. How have you- How has VMware preserved a culture of innovation amongst its peers when it's competitive as hell inside VMware? One, to be smart, to achieve the success. But also, VMware has always been in a moving market. How do you guys do it? What's the secret sauce? >> I mean, there's not a single thing. Like you said, culture is something that happens over time and is preserved over time, and is preserved through people. It's not like anything you can write down, right? Of course you can write it down. But it won't be worth the paper it's written on unless it's practiced every day by other people. And so, I think that is the key thing here. Right from the get-go, customer-centric innovation has been the rule here. So, the question to always ask of a great innovation is, look at it from the customer's point of view. I think that matters a lot here. Secondly, there is a lot of emphasis on breaking the rules in terms of doing something disruptive, right? And the engineers that come here tend to be the kind to respond to that, right? And then, lots of venues. This is not the only thing that we do, right? We do these things called Borathons, which are our internal version of hackathons. We do regional versions of these things. Each of the teams, like the business units, has its own little R&D innovation activities that go on. >> They have a playground. They can basically go outside the scope of their job. >> Exactly. >> Get a passion, an idea, and go after it, and not have to worry about anything. >> Yup, exactly. >> [John] With a path to commercialization, if it hits. >> Yeah, that's what I was gonna say. We have a fairly high success rate, I would say, of taking things that we see here and turning them into product and eventually into monetizable businesses, and all the things that go into product features. >> Give some examples of historical successes, notables, and then also talk about some that aren't as notable that have come out. I know a lot has come out of this, the numbers are clear. What are some highlights that have come out of the Radio event that have been blockbuster successes? >> A lot of the things that you see in networking today came out of Radio. Things about doing security and networking from the hypervisor up came from here. What you see today as vSAN had its roots here. What you see today with AppDefense and the security stuff had its roots here. A lot of the features that are in vSphere today, especially Storage vMotion and so on and so forth, were first showcased here. This goes on and on and on. We also have a lot of things that have shown up here that we have not pursued. For example, almost like an eBay for VM capacity. We didn't pursue it. God knows, that could've been a huge idea. (laughing) >> It's the misses too. >> Yeah, there are the misses too. But that's the whole point of this. >> Yeah. There are parts to creativity. How much creativity goes on at this event? I mean, there's certainly a lot of barnstorming, brainstorming, or whatever you wanna call it. A lot of interaction, physical, face to face.
How much creativity do you think is happening here? >> Yeah, so a few years back they introduced a couple of things. One is an instant birds-of-a-feather, where you can literally go to a whiteboard and say, "Hey, let's discuss this topic," set up a time, and then people show up. There's this other one they call Lightning Rounds, which literally happens over drinks, I think tomorrow or something, where people come in and it's sort of a mini gauntlet where nothing is scripted. All sorts of crazy ideas keep flowing. I would say those are two examples where there's a lot of on-the-spot creativity. As a company, the R&D teams have gotten more dispersed. This is the opportunity for people to get together, even within the same business unit or across business units, and say, let's go solve this problem. You and I have been talking about this on email, let's talk about it face to face. Hey, let's bring somebody else in that's relevant to this conversation as well. So, those are the kinds of things that go on here that spark the creativity. And then of course, the exhibits. When people start thinking about these exhibits and talking to the people that are showing there, other ideas get spawned off as well. >> Raghu, talk about, just from your experience, you've got a great track record, and certainly at VMware it goes back to the early 2000s. What is your observation on the innovation formula? What's been the consistent constant of innovation? As the waves have changed- I mean, I've been in Palo Alto for 19 years now, in my 20th year. Even Palo Alto's changed. So, the world's changed, it's modern. And we'll get to the Amazon deal in a second. Certainly cloud's here. What have you seen as the constant innovation variable? >> What I would say is this. Fundamentally, the people that we tend to recruit into VMware are by and large what we call, or at least I call, platform thinkers. So, they think of building a fundamental piece of technology that could possibly be used in 10 different ways, and they build it for one particular use case. And then the question goes back to, now that we've done this, what else can we do with this foundational technology? If you look at vSphere, it's the same thing. If you look at networking, same thing. Storage is the same thing. So, I would say that is the constant. That's one constant here. Which is, how do you build, fundamentally, a platform that could be used in very different ways. >> Some will also say systems thinking. >> Exactly, so that's a compliment. >> The cloud is a system. >> (mumbles) I think of Paul Maritz's 2010 picture. Although some of the calls didn't come out, he kind of generally had the architecture. >> Yeah, yeah >> He nailed it. (laughs) >> There are a few people like Paul in the world, and absolutely he nailed it. >> Dave and I would give him a lot of credit for that. Okay, let's talk about Amazon Web Services. Certainly Radio's now in its 14th year. At what point did the cloud start clicking in? You said there were some misses, the eBay for VMs. Certainly cloud is on the radar. >> Yeah >> And vCloud, we know what happened there. Pat talked about how you guys really took that opportunity, which is, you made lemonade out of some lemons there with that product. Those are my words, not his. When did cloud first appear on the horizon at Radio, and how do you see that happening now as we talk multi-cloud? >> You missed the alumni session today.
One of the early engineers said that when he was interviewed by Mendel, back in 1999 (Mendel is of course one of the founders and the first chief scientist here), Mendel foresaw it even then. When the engineer asked him, "How are we gonna make money on this?" he thought there would be a day when people would just rent compute capacity from a data center instead of going out and buying gear. In some ways- >> He predicted >> He predicted >> Cloud operations >> Back in the company's starting days. But really, I think we saw this in 2005, 2006, 2007. At the same time, actually, as Amazon saw this. But the big difference was we were growing 100% a year on the core business and we had our hands full that way. We felt like, as a software company, the way to play it was by delivering technology for other people to build it. So, that's when it really made its way here, in Radio and in the products. >> And by the way, it wasn't obvious to many people in the industry at that time, except to Amazon. I've had many conversations with Andy Jassy, and he now uses the term being misunderstood. They were completely misunderstood unless you were an entrepreneur who was using EC2 just to avoid raising seed money. 'Cause it was a dream for entrepreneurs at that time. I remember that clearly. That was not obvious. It really wasn't obvious until about 2009, 2010. So you guys were growing. Missed that. Radio is not about missing it. It's about identifying. >> Exactly. >> So, how does it translate today for Amazon? >> The Amazon relationship, if you think about the technical underpinnings of it, clearly we did vCloud Air. We learned a lot from that. Among some of our engineers, the question that was asked was, "What if we could run a cloud on top of other people's clouds?" And we did experiments with nested virtualization. We did experiments with bare metal. And that was the start of our model. So, that's one of the early technical indicators of what we could do on other people's clouds. So, that's a big thing. The rest of the things we're doing with respect to elastically growing capacity and all those things came from experiments that showed up here. So, that was the connection back to Radio. In terms of the Amazon partnership itself, a lot of it was driven from the customer end. As we were thinking about vCloud Air not working the way we wanted it to work, we went back to the customers and said, "What is wrong with this picture?" And the answer that came back was very clear. They said, we like the hybrid idea, but we want the hybrid to be VMware on prem and Amazon in the cloud, because 70% of our customers turned out to be AWS customers. And at the same time, AWS was hearing the same thing. Why don't you guys team up instead of being either/or? That's what led to the partnership. >> Your team at VMware came at the cloud native piece? >> Yeah >> Aspect of it. So Kubernetes is on the horizon. Not on the horizon, in your face. And you've got service mesh over the top. >> Yep, yep >> That's up the stack. It's networking. >> Yep, exactly. >> Still needs to do networking. >> Yeah, exactly. >> It's like, you guys must be like, hey, we love what's going on up there. Come down to the store. >> Yeah. So, the boundary between what is an application platform and what is an infrastructure platform is constantly changing. Kubernetes, when it started out, people said, oh, it's an application platform. Now it turns out it's actually infrastructure. Same thing in networking.
So what we see is, for things that were the lower-level infrastructure constructs, the same idea is applied at the next level up. That's why we love Kubernetes. We love Service Mesh. We love similar concepts that are coming about in storage and security. It's one- >> A unified stack is coming. >> Yep, exactly. >> Just someone fix networking, and then the holy grail, programmable networks. >> Yep >> When are they coming? >> At the application level. >> Let's go >> Yeah >> The holy grail is finally here. It's not where you thought it was gonna be. >> It is at both places, right. I mean, it's tying back to the conventional layer two, layer three stuff, because that's also still important. >> Raghu, I love having a chat with you. It's great to chat. >> Good to see you again, John. >> Super impressive with the work you've been doing. Love the cloud deal with Amazon, you know that. Love what's going on with Kubernetes and containerization. Love what's going on with Service Mesh, the unified stack. Love cryptocurrency, which I didn't get to ask you about. >> Yep >> Thumbs up? >> Crazy things going on there too. >> Thumbs up, okay, thumbs up. >> We're watching the cryptocurrency. >> Watching, token economics coming right behind it. It's theCUBE bringing you all the action here at Radio. We're the signal. 2018, Radio 2018. It's theCUBE with Raghu. I'll be right back with more coverage after this short break. (upbeat music)

Published Date: May 30, 2018

