Breaking Analysis: MWC 2023 highlights telco transformation & the future of business
>> From the Cube Studios in Palo Alto and Boston, bringing you data-driven insights from The Cube and ETR. This is "Breaking Analysis" with Dave Vellante.

>> The world's leading telcos are trying to shed the stigma of being monopolies lacking innovation. Telcos have been great at operational efficiency and connectivity, living off of transmission and the costs, expenses, and revenue associated with that transmission. But in a world beyond telephone poles and basic wireless and mobile services, how will telcos modernize, become more agile, and monetize new opportunities brought about by 5G, private wireless, and a spate of new innovations in infrastructure, cloud, data, and apps? Hello, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, ahead of Mobile World Congress, or now MWC23, we explore the evolution of the telco business and how the industry is in many ways mimicking transformations that took place decades ago in enterprise IT. We'll model some of the traditional enterprise vendors using ETR data and investigate how they're faring in the telecommunications sector, and we'll pose some of the key issues facing the industry this decade.

First, let's take a look at what the GSMA has in store for MWC23. GSMA is the host of what used to be called Mobile World Congress. They've set the theme for this year's event as "Velocity" and they've rebranded MWC to reflect the fact that mobile technology is only one part of the story. MWC has become one of the world's premier events highlighting innovations not only in telco, mobile, and 5G, but the collision between cloud, infrastructure, apps, private networks, smart industries, machine intelligence, AI, and more. MWC comprises an enormous ecosystem of service providers, technology companies, and firms from virtually every industry, including sports and entertainment. As well, GSMA, along with its venue partner, the Fira Barcelona, has placed a major emphasis on sustainability and public and private partnerships. Virtually every industry will be represented at the event because every industry is impacted by the trends and opportunities in this space. GSMA has said it expects 80,000 attendees at MWC this year, not quite back to 2019 levels, but trending in that direction. Of course, attendance from Chinese participants has historically been very high at the show, and obviously the continued travel issues from that region are affecting the overall attendance, but it will still be very strong. And despite these concerns, Huawei, the giant Chinese technology company, has the largest physical presence of any exhibitor at the show. And finally, GSMA estimates that more than $300 million in economic benefit will result from the event, which takes place at the end of February and early March.

The Cube will be back at MWC this year with a major presence thanks to our anchor sponsor, Dell Technologies, and other supporters of our content program, including Enterprise Web, ArcaOS, VMware, Snowflake, Cisco, AWS, and others. And one of the areas we're interested in exploring is the evolution of the telco stack. It's a topic that's often talked about and one that we observed taking place in the 1990s, when the vertically integrated IBM mainframe monopoly gave way to a disintegrated and horizontal industry structure. And in many ways, the same thing is happening today in telecommunications, which is shown on the left-hand side of this diagram.
Historically, telcos have relied on a hardened, integrated, incredibly reliable, and secure set of hardware and software services that have been fully vetted, tested, certified, and relied upon for decades. At the top of that stack on the left are the crown jewels of the telco stack, the operational support systems and the business support systems. For the OSS, we're talking about things like network management, network operations, service delivery, quality of service, fulfillment, assurance, and the like. The BSS systems refer to the customer-facing elements of the stack, like revenue and order management, the products they sell, billing, and customer service. And what we're seeing is that telcos have been really good at operational efficiency and making money off of transport and connectivity, but they've lacked innovation in services and applications. They own the pipes and that works well, but others, be they over-the-top content companies, private network providers, or increasingly cloud providers, have been able to bypass the telcos, reach around them, if you will, and drive innovation. And so, the right-most diagram speaks to the need to disaggregate pieces of the stack.

And while the similarities to the 1990s in enterprise IT are greater than the differences, there are things that are different. For example, the granularity of hardware infrastructure will likely not be as high as in the 90s, when competition occurred at every layer of the value chain with very little infrastructure integration. That of course changed in the 2010s with converged infrastructure, hyperconverged systems, and software-defined everything. So, that's one difference. And the advent of cloud, containers, microservices, and AI, none of that was really a major factor in the disintegration of legacy IT. And that probably means that disruptors can move even faster than did the likes of Intel, Microsoft, Oracle, Cisco, and the Seagates of the 1990s. As well, while many of the products and services will come from traditional enterprise IT names like Dell, HPE, Cisco, Red Hat, VMware, AWS, Microsoft, Google, et cetera, many of the names are going to be different and come from traditional network equipment providers. These are names like Ericsson, Huawei, and Nokia, and others like Wind River, Rakuten, and Dish Networks. And there are enormous opportunities in data to help telecom companies and their competitors go beyond telemetry data into more advanced analytics and data monetization. There's also going to be an entirely new set of apps based on workloads and use cases ranging from hospitals, sports arenas, and race tracks to shipping ports, you name it. Virtually every vertical will participate in this transformation as the industry evolves its focus toward innovation, agility, and open ecosystems.

Now remember, this is not a binary state. There are going to be greenfield companies upsetting the apple cart, but the incumbent telcos are going to have to continue to ensure that newer systems work with their legacy infrastructure and their existing OSS and BSS systems. And as we know, this is not going to be an overnight task. Integration is difficult, as are transformations and migrations. So that's what makes this all so interesting, because others can come in with greenfield approaches and potentially disrupt. Interesting partnerships will emerge, and ecosystems and coalitions will form.
Now, we mentioned that several traditional enterprise companies are or will be playing in this space. ETR doesn't have a ton of data on specific telecom equipment and software providers, but it does have some interesting data that we cut for this Breaking Analysis. What we're showing here in this graphic is some of the names that we've followed over the years and how they're faring. Specifically, we did the cut within the telco sector. The Y-axis shows Net Score, or spending velocity, and the horizontal axis shows the presence or pervasiveness in the data set. The table insert in the upper left informs as to how the dots are plotted, with two columns, Net Score and the Ns. And that red-dotted horizontal line at 40% is an indicator of a highly elevated level; anything above that, we consider quite outstanding. What we'll do now is comment on some of the cohorts and share with you how they're doing in the telecommunications vertical relative to their position overall in the data set.

Let's start with the public cloud players. They're prominent in every industry, and telecommunications is no exception; it's quite an interesting cohort here. On the one hand, they can help telecommunication firms modernize and become more agile by eliminating the heavy lifting, lowering data center costs, and delivering the rest of the familiar cloud value prop. At the same time, public cloud players are bringing their services to the edge, building out their own global networks, and are a disruptive force to traditional telcos. All right, let's talk about Azure first. Its net score in telco is basically identical to its overall average. AWS's net score is higher in telco by just a few percentage points. Google Cloud Platform is eight percentage points higher in telco with a 53% net score. So all three hyperscalers have an equal or stronger presence in telco than their average overall.

Okay, let's look at the traditional enterprise hardware and software infrastructure cohort: Dell, Cisco, HPE, Red Hat, VMware, and Oracle. We've highlighted these in the chart as indicators or proxies. Dell's net score is 10 percentage points higher in telco than its overall average. Interesting. Cisco's is a bit higher. HPE's is actually lower by about nine percentage points in the ETR survey, and VMware's is lower by about four percentage points. Now, Red Hat is really interesting. OpenStack, as we've previously reported, is popular with telcos who want to build out their own private cloud. And the data shows that Red Hat OpenStack's net score is 15 percentage points higher in the telco sector than its overall average. OpenShift, on the other hand, has a net score that's four percentage points lower in telco than its overall average. So this to us speaks to the pace of adoption of microservices and containers. It's going to happen, but it's going to happen more slowly. Finally, Oracle's spending momentum is somewhat lower in the sector than its average, despite the firm having a decent telco business. IBM and Accenture, heavy services companies, are both lower in this sector than their average. And real quickly, Snowflake's net score is much lower, by about 12 percentage points, relative to its very high average net score of 62%. But we look for them to be a player in this space as telcos need to modernize their analytics stack and share data in a governed manner.
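As an aside, for those who want to see the mechanics behind a chart like this, here's a minimal sketch of how a Net Score scatter might be computed and plotted. Net Score is, roughly, the percentage of customers spending more on a platform minus the percentage spending less; the survey-response buckets, column names, and file layout below are illustrative assumptions, not ETR's actual schema.

```python
# Minimal sketch: compute Net Score per vendor and plot it against shared N.
# Assumes a survey extract with one row per (vendor, response) pair, where
# "response" is one of: adopting, increasing, flat, decreasing, replacing.
# The bucket names and file layout are illustrative assumptions.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("telco_sector_responses.csv")  # hypothetical extract

def net_score(group: pd.DataFrame) -> pd.Series:
    pct = group["response"].value_counts(normalize=True) * 100
    positive = pct.get("adopting", 0) + pct.get("increasing", 0)
    negative = pct.get("decreasing", 0) + pct.get("replacing", 0)
    return pd.Series({"net_score": positive - negative, "shared_n": len(group)})

scores = df.groupby("vendor").apply(net_score).reset_index()

fig, ax = plt.subplots()
ax.scatter(scores["shared_n"], scores["net_score"])
for _, row in scores.iterrows():
    ax.annotate(row["vendor"], (row["shared_n"], row["net_score"]))
ax.axhline(40, color="red", linestyle="--")  # the highly elevated 40% line
ax.set_xlabel("Shared N (pervasiveness in the data set)")
ax.set_ylabel("Net Score (spending velocity, %)")
plt.show()
```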
Databricks' net score is also much lower than its average, by about 13 points. And likewise, I would expect them to be a player as open architectures and cloud gain steam in telco.

All right, let's close out now on what we're going to be talking about at MWC23 and some of the key issues that we'll be unpacking. We've talked about stack disaggregation in this Breaking Analysis, but the key will be the pace at which it reaches the operational efficiency and reliability of closed stacks. Telcos, in large part, are engineering-heavy firms, and much of their work takes place in the basement, in the dark. It's not really a big public hype machine, and they tend to move slowly and cautiously. While they understand the importance of agility, they're going to be careful, because it's in their DNA. At the same time, if they don't move fast enough, they're going to get hurt and disrupted by competitors. So that's going to be a topic of conversation, and we'll be looking for proof points. The other comment I'll make is around integration. Telcos, because of their conservatism, will benefit from better testing, and those firms that can innovate on the testing front, with labs, certifications, and an ecosystem, are going to be in a better position, because open sometimes means wild west. So the more that players like Dell, HPE, Cisco, Red Hat, et cetera, do that, align with their ecosystems, and provide those resources, the faster adoption is going to go. So we'll be looking for who's actually doing that.

Open RAN, or open radio access networks, also fits into this discussion because O-RAN is an emerging network architecture. It essentially enables the use of open technologies from an ecosystem, but a lot of questions remain as to when it will be able to deliver the operational efficiency of traditional RAN. There are some interesting dynamics going on. Rakuten is a company that's working hard on this problem, really focusing on operational efficiency. Then you've got Dish Networks. They're also embracing O-RAN, but they're coming at it more from the angle of service innovation. So that's something that we'll be monitoring and unpacking.

We're going to look at cloud as a disruptor. On the one hand, cloud can help drive agility, as we said earlier, along with optionality and innovation for incumbent telcos. The flip side is that it will do the same for startups trying to disrupt, and cloud attracts startups. While some of the telcos are actually embracing the cloud, many are being cautious. So that's going to be an interesting topic of discussion. And there are private wireless networks, 5G, and hyperlocal private networks being deployed at the edge. This idea of open edge is also a really hot topic, and this trend is going to accelerate. The important point is that the use cases are going to be widely varied. The needs of a hospital are going to be different from those of a sports venue, which are different from a remote drilling location in energy or a concert venue. Things like real-time AI inference and data flows are going to bring new services and monetization opportunities, and many firms are going to be bypassing traditional telecommunications networks to build these out. Satellites as well: in this decade, you're going to look down at Google Earth and you're going to see real-time imagery.
Today you see snapshots, so there's lots of innovation going on in that space. So how is this going to disrupt industries and traditional industry structures? Now, as always, we'll be looking at the data angles, because it's in The Cube's DNA to follow the data and the opportunities and risks that data brings.

The Cube is going to be on location at MWC23 at the end of the month. We've got a great set. We're in the walkway between halls four and five, right in Congress Square, at stand CS-60. We have a full schedule. I'm going to be there with Lisa Martin, Dave Nicholson, and the entire Cube crew, so don't forget to stop by.

All right, that's a wrap. I want to thank Alex Myerson, who's on production and manages the podcast, and Ken Schiffman as well. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at Silicon Angle; he does some great stuff for us. Thank you all. Remember, all these episodes are available as podcasts. Wherever you listen, just search "Breaking Analysis" podcast. I publish each week on wikibon.com and siliconangle.com, and all the video content is available on demand at thecube.net. You can email me directly at david.vellante@siliconangle.com, DM me at @dvellante, or comment on my LinkedIn posts. Please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for The Cube Insights powered by ETR. Thanks for watching, and we'll see you at Mobile World Congress, and/or next time on "Breaking Analysis." (bright music) (bright music fades)
Breaking Analysis: Supercloud2 Explores Cloud Practitioner Realities & the Future of Data Apps
>> Narrator: From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante.

>> Enterprise tech practitioners, like most of us, want to make their lives easier so they can focus on delivering more value to their businesses. And to do so, they want to tap best-of-breed services in the public cloud, but at the same time connect their on-prem intellectual property to emerging applications which drive top-line revenue and bottom-line profits. But creating a consistent experience across clouds and on-prem estates has been an elusive capability for most organizations, forcing trade-offs and injecting friction into the system. The need to create seamless experiences is clear, and the technology industry is starting to respond with platforms, architectures, and visions of what we've called the Supercloud. Hello and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis we give you a preview of Supercloud 2, the second event of its kind that we've had on the topic. Yes, folks, that's right, Supercloud 2 is here. As of this recording, it's just about four days away: 33 guests, 21 sessions, combining live discussions and fireside chats from theCUBE's Palo Alto studio with prerecorded conversations on the future of cloud and data. You can register for free at supercloud.world. And we are super excited about the Supercloud 2 lineup of guests. Whereas Supercloud 22 in August was all about refining the definition of Supercloud, testing its technical feasibility, and understanding various deployment models, Supercloud 2 features practitioners, technologists, and analysts discussing what customers need, with real-world examples of Supercloud, and will expose thinking around a new breed of cross-cloud apps, data apps, if you will, that change the way machines and humans interact with each other.

Now, as an example, if you think about applications today, say a CRM system, sales reps, what are they doing? They're entering data into opportunities, they're choosing products, they're importing contacts, et cetera. And sure, the machine can then take all that data and spit out a forecast by rep, by region, by product, et cetera. But today's applications are largely about filling in forms and/or codifying processes. In the future, the Supercloud community sees a new breed of applications emerging where data resides on different clouds, in different data stores, databases, lakehouses, et cetera, and the machine uses AI to inspect the e-commerce system, the inventory data, supply chain information, and other systems, and puts together a plan without any human intervention whatsoever. Think about a system that orchestrates people, places, and things, like an Uber for business.

So at Supercloud 2, you'll hear about this vision along with some of today's challenges facing practitioners. Zhamak Dehghani, the creator of data mesh, is a headliner. Kit Colbert is also headlining. He laid out at the first Supercloud an initial architecture for what that's going to look like. That was last August, and he's going to present his most current thinking on the topic. Veronika Durgin of Saks will be featured and talk about data sharing across clouds and what she needs in the future. One of the main highlights of Supercloud 2 is a dive into Walmart's Supercloud. Other featured practitioners include Western Union, Ionis Pharmaceuticals, and Warner Media.
We've got deep, deep technology dives with folks like Bob Muglia, David Flynn, Tristan Handy of DBT Labs, and Nir Zuk, the founder of Palo Alto Networks, focused on security. Thomas Hazel is going to talk about a new type of database for Supercloud. There are several analysts, including Keith Townsend, Maribel Lopez, George Gilbert, Sanjeev Mohan, and so many more guests, we don't have time to list them all. They're all up on supercloud.world with a full agenda, so you can check that out.

Now let's take a look at some of the things that we're exploring in more detail, starting with the Walmart Cloud Native Platform, which they call WCNP. We definitely see this as a Supercloud, and we dig into it with Jack Greenfield. He's the head of architecture at Walmart. Here's a quote from Jack: "WCNP is an implementation of Kubernetes for the Walmart ecosystem. We've taken Kubernetes off the shelf as open source." By the way, they do the same thing with OpenStack. "And we have integrated it with a number of foundational services that provide other aspects of our computational environment. Kubernetes off the shelf doesn't do everything." And so what Walmart chose to do is take a do-it-yourself approach to build a Supercloud, for a variety of reasons that Jack will explain, along with Walmart's so-called triplet architecture connecting on-prem, Azure, and GCP. No surprise, there's no Amazon at Walmart for obvious reasons. And what they do is create a common experience for devs across clouds. Jack is going to talk about how Walmart is evolving its Supercloud in the future. You don't want to miss that.

Next, let's take a look at how Veronika Durgin of Saks thinks about data sharing across clouds. Data sharing, we think, is a potential killer use case for Supercloud. In fact, let's hear it in Veronika's own words. Please play the clip.

>> How do we talk to each other? And more importantly, how do we data share? You know, I work with data, you know, this is what I do. So if, you know, I want to get data from a company that's using, say, Google, how do we share it in a smooth way where it doesn't have to be this crazy, I don't know, SFTP file moving? So that's where I think Supercloud comes to me in my mind, is like practical applications. How do we create that mesh, that network, that we can easily share data with each other?

>> Now, data mesh is a possible architectural approach that will enable more facile data sharing and the monetization of data products. You'll hear Zhamak Dehghani live in studio talking about what standards are missing to make this vision a reality across the Supercloud.

Now, one of the other things that we're really excited about is digging deeper into the right approach for Supercloud adoption, and we're going to share a preview of a debate that's going on right now in the community. Bob Muglia, former CEO of Snowflake and former Microsoft exec, was kind enough to spend some time looking at the community's Supercloud definition, and he felt that it needed to be simplified. So in near real time he came up with the following definition that we're showing here. I'll read it: "A Supercloud is a platform that provides programmatically consistent services hosted on heterogeneous cloud providers." So not only did Bob simplify the initial definition, he stressed that the Supercloud is a platform versus an architecture, implying that the platform provider, e.g. Snowflake, VMware, Databricks, Cohesity, et cetera, is responsible for determining the architecture.
Now, interestingly, in the shared Google doc that the working group uses to collaborate on the Supercloud definition, Dr. Nelu Mihai, who is actually building a Supercloud, responded as follows to Bob's assertion: "We need to avoid creating many Supercloud platforms with their own architectures. If we do that, then we create other proprietary clouds on top of existing ones. We need to define an architecture of how Supercloud interfaces with all other clouds. What is the information model? What is the execution model and how will users interact with Supercloud?" What does this seemingly nuanced point tell us and why does it matter? Well, history suggests that de facto standards emerge more quickly to resolve real-world practitioner problems and catch on faster than consensus-based and standards-based architectures. But in the long run, the latter may serve customers better. So we'll be exploring this topic in more detail at Supercloud 2, and of course we'd love to hear what you think: platform, architecture, both?

Now, one of the real technical gurus that we'll have in studio at Supercloud 2 is David Flynn. He's one of the people behind the movement that enabled enterprise flash adoption, that craze. And he is now working on a system to enable read-write data access for any user, in any application, in any data center, or on any cloud, anywhere. So think of this company as a Supercloud enabler. Allow me to share an excerpt from a conversation David Floyer and I had with David Flynn last year. He as well gave a lot of thought to the Supercloud definition and was really helpful with an opinionated point of view. He said something to us that we thought was relevant: "What is the operating system for a decentralized cloud? The main two functions of an operating system or an operating environment are, one, the process scheduler, and two, the file system. The strongest argument for Supercloud is made when you go down to the platform layer and talk about it as an operating environment on which you can run all forms of applications."

So a couple of implications here that we'll be exploring with David Flynn in studio. First, we're inferring from his comment that he's in the platform camp, where the platform owner is responsible for the architecture; there are obviously trade-offs and benefits there, but we'll have to clarify that with him. And second, he's basically saying you kill the concept the further you move up the stack. So the further you move up the stack, the weaker the Supercloud argument becomes, because it's just becoming SaaS. Now, this is something we're going to explore to better understand his thinking on this, but also whether the existing notion of SaaS is changing and whether or not a new breed of Supercloud apps will emerge.

Which brings us to this really interesting fellow that George Gilbert and I riffed with ahead of Supercloud 2: Tristan Handy. He's the founder and CEO of DBT Labs, and he has a highly opinionated and technical mind. Here's what he said: "One of the things that we still don't know how to API-ify is concepts that live inside of your data warehouse, inside of your data lake. These are core concepts that the business should be able to create applications around very easily. In fact, that's not the case, because it involves a lot of data engineering pipeline and other work to make these available.
So if you really want to make it easy to create these data experiences for users, you need to have an ability to describe these metrics and then to turn them into APIs to make them accessible to application developers who have literally no idea how they're calculated behind the scenes, and they don't need to." There are a lot of implications to this statement that we'll explore at Supercloud 2. Zhamak Dehghani's data mesh comes into play here, with her critique of hyper-specialized data pipeline experts with little or no domain knowledge. Also the need for simplified self-service infrastructure, which Kit Colbert is likely going to touch upon; Veronika Durgin of Saks and her ideal state for data sharing; along with Harveer Singh of Western Union. They've got to deal with 200 locations around the world, data privacy issues, data sovereignty: how do you share data safely? Same with Nick Taylor of Ionis Pharmaceuticals. And not to blow your mind, but Thomas Hazel and Bob Muglia posit that to make data apps a reality across the Supercloud, you have to rethink everything. You can't just let in-memory databases and caching architectures take care of everything in a brute-force manner. Rather, you have to get down to really detailed levels, even things like how data is laid out on disk, i.e. flash, and think about rewriting applications for the Supercloud and the ML and AI era.

All of this and more at Supercloud 2, which wouldn't be complete without some data. So we pinged our friends from ETR, Eric Bradley and Darren Bramberm, to see if they had any data on Supercloud that we could tap, and so we're going to be analyzing a number of the players as well at Supercloud 2. Now, many of you are familiar with this graphic here; we show some of the players involved in delivering or enabling Supercloud-like capabilities. On the Y-axis is spending momentum, and on the horizontal axis is market presence or pervasiveness in the data, so Net Score versus what they call overlap, or N, in the data. And the table insert shows how the dots are plotted. Now, not to steal ETR's thunder, but the first point is you really can't have Supercloud without the hyperscale cloud platforms, which are shown on this graphic. But the exciting aspect of Supercloud is the opportunity to build value on top of that hyperscale infrastructure. Snowflake here continues to show strong spending velocity, as do Databricks, Hashi, and Rubrik. VMware Tanzu, which we all put under the magnifying glass after the Broadcom announcements, is also showing momentum. Unfortunately, due to a scheduling conflict, we weren't able to get Red Hat on the program, but they're clearly a player here. And we've put Cohesity and Veeam on the chart as well because backup is a likely use case across clouds and on-premises. And one other call-out that we drill down on at Supercloud 2 is Cloudflare, which actually uses the term supercloud, maybe in a different way. They look at Supercloud really as, you know, serverless on steroids. And so the data brains at ETR will have more to say on this topic at Supercloud 2, along with many others.

Okay, so why should you attend Supercloud 2? What's in it for me, kind of thing? So first of all, if you're a practitioner and you want to understand what the possibilities are for doing cross-cloud services, for monetizing data, how your peers are doing data sharing, how some of your peers are actually building out a Supercloud, you're going to get real-world input from practitioners.
If you're a technologist and you're trying to figure out various ways to solve problems around data, data sharing, and cross-cloud service deployment, there are going to be a number of deep technology experts who will share how they're doing it. We're also going to drill down with Walmart into a practical example of Supercloud, with some other examples of how practitioners are dealing with cross-cloud complexity. Some of them, by the way, have kind of thrown up their hands and said, hey, we're going mono cloud. And we'll talk about the potential implications, dangers, and risks of doing that, and also some of the benefits. You know, there's a question, right? Is Supercloud the same wine in a new bottle, or is it truly something different that can drive substantive business value? So look, go to supercloud.world. It's January 17th at 9:00 AM Pacific. You can register for free and participate directly in the program.

Okay, that's a wrap. I want to give a shout out to the Supercloud supporters. VMware has been a great partner as our anchor sponsor, and Chaos Search, Proximo, and Alura as well, for contributing to the effort. I want to thank Alex Myerson, who's on production and manages the podcast, with Ken Schiffman in the supporting cast as well. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at Silicon Angle. Thank you all. Remember, these episodes are all available as podcasts. Wherever you listen, we really appreciate the support that you've given. We just saw some stats from Buzzsprout: we hit the top 25%, and we're almost at 400,000 downloads last year, so we really appreciate your participation. All you've got to do is search Breaking Analysis podcast and you'll find those. I publish each week on wikibon.com and siliconangle.com. Or if you want to get ahold of me, you can email me directly at David.Vellante@siliconangle.com, DM me @dvellante, or comment on our LinkedIn posts. I want you to check out etr.ai; they've got the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching. We'll see you next week at Supercloud 2, or next time on Breaking Analysis. (light music)
Analyst Predictions 2023: The Future of Data Management
(upbeat music) >> Hello, this is Dave Vellante with theCUBE, and one of the most gratifying aspects of my role as a host of theCUBE TV is that I get to cover a wide range of topics. And quite often, we're able to bring to our program a level of expertise that allows us to more deeply explore and unpack some of the topics that we cover throughout the year. And one of our favorite topics, of course, is data. Now, in 2021, after being in isolation for the better part of two years, a group of industry analysts met up at AWS re:Invent and started a collaboration to look at the trends in data and predict what some likely outcomes will be for the coming year. And it resulted in a very popular session that we had last year focused on the future of data management. And I'm very excited and pleased to tell you that the 2023 edition of that predictions episode is back, and with me are five outstanding market analysts: Sanjeev Mohan of SanjMo, Tony Baer of dbInsight, Carl Olofson from IDC, Dave Menninger from Ventana Research, and Doug Henschen, VP and Principal Analyst at Constellation Research. Now, what is it that we're calling you guys? A data pack, like the rat pack? No, no, no, no, that's not it. It's the data crowd, the data crowd, and the crowd includes some of the best minds in the data analyst community. They'll discuss how data management is evolving and what listeners should prepare for in 2023. Guys, welcome back. Great to see you.

>> Good to be here.

>> Thank you.

>> Thanks, Dave. (Tony and Dave faintly speak)

>> All right, before we get into 2023 predictions, we thought it'd be good to do a look back at how we did in 2022 and give a transparent assessment of those predictions. So, let's get right into it. We're going to bring these up here, the predictions from 2022. They're color-coded red, yellow, and green to signify the degree of accuracy. And I'm pleased to report there's no red. Well, maybe some of you will want to debate that grading system. But as always, we want to be open, so you can decide for yourselves. So, we're going to ask each analyst to review their 2022 prediction and explain their rating and what evidence they have that led them to their conclusion. So, Sanjeev, please kick it off. Your prediction was data governance becomes key. I know that's going to knock you guys over, but elaborate, because you had more detail when you double-click on that.

>> Yeah, absolutely. Thank you so much, Dave, for having us on the show today. And we self-graded ourselves. I could have very easily made my prediction from last year green, but I mentioned why I left it as yellow. I totally, fully believe that data governance was in a renaissance in 2022. And why do I say that? You have to look no further than AWS launching its own data catalog, called DataZone. Before that, mid-year, we saw Unity Catalog from Databricks go GA. So, overall, I saw there was tremendous movement. When you see these big players launching a new data catalog, you know that they want to be in this space. And this space is highly critical to everything that I feel we will talk about in today's call. Also, if you look at established players, I spoke at Collibra's conference and at data.world's; I work closely with Alation, Informatica, and a bunch of other companies, and they all added tremendous new capabilities. So, it did become key. The reason I left it as yellow is because I had made a prediction that Collibra would go IPO, and it did not. And I don't think anyone is going IPO right now.
The market, the funding and VC IPO market, is really, really down. But other than that, data governance had a banner year in 2022.

>> Yeah. Well, thank you for that. And of course, you saw data clean rooms being announced at AWS re:Invent, so more evidence. And I like the fact that you included in your predictions some things that were binary, so you dinged yourself there. So, good job. Okay, Tony Baer, you're up next. Data mesh hits reality check. As you see here, you've given yourself a bright green thumbs up. (Tony laughing) Okay. Let's hear why you feel that was the case. What do you mean by reality check?

>> Okay. Thanks, Dave, for having us back again. This is something I just wrote about and just tried to get away from, and this topic just won't go away. I did speak with a number of folks, early adopters and non-adopters, during the year. And I did find that basically it pretty much validated what I was expecting, which was that this has now become a front-burner issue. And if I had any doubt in my mind, the evidence I would point to is what was originally intended to be a throwaway post on LinkedIn, which I just quickly scribbled down the night before leaving for re:Invent. I was packing at the time, and for some reason, I was doing a Google search on data mesh. And I happened to have tripped across this ridiculous article, I will not say where, because it doesn't deserve any publicity, about the eight (Dave laughing) best data mesh software companies of 2022. (Tony laughing) One of my predictions was that you'd see data mesh washing. And I just quickly hopped on that, maybe three sentences, and wrote it in about a couple minutes saying this is hogwash, essentially. (laughs) And then, I left for re:Invent. And the next night, when I got into my Vegas hotel room, I clicked on my computer. I saw 15,000 hits on that post, which was the most hits of any single post I put up all year. And the responses were wildly pro and con. So, it pretty much validates my expectation in that data mesh really did hit a lot more scrutiny over this past year.

>> Yeah, thank you for that. I remember that article. I remember rolling my eyes when I saw it, and then recently, (Tony laughing) I talked to Walmart and they actually invoked Martin Fowler and said that they're working through their data mesh. So, it takes really a lot of thought, and it really, as we've talked about, is as much an organizational construct. You're not buying data mesh

>> Bingo.

>> to your point. Okay. Thank you, Tony. Carl Olofson, here we go. You've graded yourself a yellow on the prediction of graph databases take off. Please elaborate.

>> Yeah, sure. So, I realized in looking at the prediction that it seemed to imply that graph databases could be a major factor in the data world in 2022, which obviously didn't become the case. It was an error on my part in that I should have said it in the right context. It's really a three to five-year time period over which graph databases will really become significant, because they still need accepted methodologies that can be applied in a business context, as well as proper tools, in order for people to be able to use them seriously. But I stand by the idea that it is taking off, because for one thing, Neo4j, which is the leading independent graph database provider, had a very good year.
And also, we're seeing interesting developments in terms of things like AWS with Neptune, and Oracle providing graph support in Oracle Database this past year. Those things are, as I said, growing gradually. There are other companies, like TigerGraph and so forth, that deserve watching as well. But as far as becoming mainstream, it's going to be a few years before we get all the elements together to make that happen. Like any new technology, you have to create an environment in which ordinary people without a whole ton of technical training can actually apply the technology to solve business problems.

>> Yeah, thank you for that. These specialized databases, graph databases, time series databases, you see them embedded into mainstream data platforms, but there's a place for these specialized databases. I would suspect we're going to see new types of databases emerge with all this cloud sprawl that we have, and maybe at the edge.

>> Well, part of it is that it's not as specialized as you might think. You can apply graphs to a great many workloads and use cases. It's just that people have yet to fully explore and discover what those are.

>> Yeah.

>> And so, it's going to be a process. (laughs)

>> All right, Dave Menninger, streaming data permeates the landscape. You gave yourself a yellow. Why?

>> Well, I couldn't think of an appropriate combination of yellow and green. Maybe I should have used chartreuse, (Dave laughing) but I was probably a little hard on myself making it yellow. This is another type of specialized data processing, like Carl was talking about with graph databases: stream processing. Nearly every data platform offers streaming capabilities now. Often, it's based on Kafka. If you look at Confluent, their revenues have grown at more than 50%, and continue to grow at more than 50% a year. They're expected to do more than half a billion dollars in revenue this year. But the thing that hasn't happened yet, and to be honest, they didn't necessarily expect it to happen in one year, is that streaming hasn't become the default way in which we deal with data. It's still a sidecar to data at rest. And I do expect that we'll continue to see streaming become more and more mainstream. I do expect, perhaps in the five-year timeframe, that we will first deal with data as streaming and then at rest, but the worlds are starting to merge. And we even see some vendors bringing products to market, such as K2View, Hazelcast, and RisingWave Labs. So, in addition to all those core data platform vendors adding these capabilities, there are new vendors approaching this market as well.

>> I like the tough grading system, and it's not trivial. And when you talk to practitioners doing this stuff, there are still some complications in the data pipeline. And so, I think you're right, it probably was a yellow plus. Doug Henschen, data lakehouses will emerge as dominant. When you talk to people about lakehouses, practitioners, they all use that term. They certainly use the term data lake, but now they're using lakehouse more and more. What are your thoughts on here? Why the green? What's your evidence there?

>> Well, I think I was accurate. I spoke about it specifically as something that vendors would be pursuing. And we saw yet more lakehouse advocacy in 2022. Google introduced its BigLake service alongside BigQuery. Salesforce introduced Genie, which is really a lakehouse architecture.
And it was a safe prediction to say vendors are going to be pursuing this, in that AWS, Cloudera, Databricks, Microsoft, Oracle, SAP, Salesforce now, and IBM all advocate this idea of a single platform for all of your data. Now, the trend was also supported in that we saw a big embrace of Apache Iceberg in 2022. That's a structured table format. It's used with these lakehouse platforms. It's open, so it ensures portability, and it also ensures performance. And that's a structured table that helps with the warehouse-side performance. Among those announcements, Snowflake, Google, Cloudera, SAP, Salesforce, and IBM all embraced Iceberg. But keep in mind, again, I'm talking about this as something that vendors are pursuing as their approach. So, they're advocating to end users. It's very cutting edge. I'd say the top, leading-edge 5% of companies have really embraced the lakehouse. I think we're now seeing the fast followers, the next 20 to 25% of firms, embracing this idea and embracing a lakehouse architecture. I recall Christian Kleinerman at the big Snowflake event last summer making the announcement about Iceberg, and he asked for a show of hands: for any of you in the audience at the keynote, have you heard of Iceberg? And just a smattering of hands went up. So, the vendors are ahead of the curve. They're pushing this trend, and we're now seeing a little bit more mainstream uptake.

>> Good. Doug, I was there. It was you, me, and I think two other hands were up. That was just humorous. (Doug laughing) All right, well, so I liked the fact that we had some yellow and some green. When you think about these things, there's the prediction itself, did it come true or not, there are the sub-predictions that you guys make, and of course, the degree of difficulty. So, thank you for that open assessment. All right, let's get into the 2023 predictions. Let's bring up the predictions. Sanjeev, you're going first. You've got a prediction around unified metadata. What's the prediction, please?

>> So, my prediction is that the metadata space is currently a mess and it needs to get unified. There are too many use cases of metadata which are being addressed by disparate systems. For example, data quality has become really big in the last couple of years, data observability, the whole catalog space. Actually, people don't like to use the word data catalog anymore, because data catalog sounds like it's a catalog, a museum, if you may, of metadata that you go and admire. So, what I'm saying is that in 2023, we will see that metadata will become the driving force behind things like DataOps, things like orchestration of tasks using metadata, not rules. I'm not saying that if this fails, then do this, and if this succeeds, go do that. It's about getting to the metadata level, and then making a decision as to what to orchestrate, what to automate, how to do the data quality checks, data observability. So, this space is starting to gel, and I see there'll be more maturation in the metadata space. Even security and privacy, some of these topics which are handled separately, and I'm just talking about data security and data privacy, I'm not talking about infrastructure security, these also need to merge into a unified metadata management piece, with some knowledge graph, semantic layer on top, so you can do analytics on it. So, it's no longer something that sits on the side, limited in its scope. It is actually the very engine, the very glue that is going to connect data producers and consumers.
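As an aside, below is a minimal sketch of what metadata-driven orchestration along the lines Sanjeev describes might look like: tasks are derived from the state recorded in metadata rather than from hard-coded if-this-then-that pipeline rules. The metadata fields, thresholds, and task names are assumptions for illustration, not any particular catalog's or product's API.

```python
# Minimal sketch: decide what to run from the state described in metadata,
# not from upstream task success/failure. Field names and thresholds are
# illustrative assumptions, not any specific catalog's schema.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class TableMetadata:
    name: str
    last_updated: datetime
    schema_version: int
    expected_schema_version: int
    null_rate: float          # observed fraction of nulls in key columns
    contains_pii: bool
    tags: list = field(default_factory=list)

def plan_tasks(meta: TableMetadata) -> list:
    """Derive orchestration tasks from metadata rather than static rules."""
    tasks = []
    if datetime.utcnow() - meta.last_updated > timedelta(hours=24):
        tasks.append(f"refresh:{meta.name}")           # freshness drives scheduling
    if meta.schema_version != meta.expected_schema_version:
        tasks.append(f"schema_reconcile:{meta.name}")  # drift triggers reconciliation
    if meta.null_rate > 0.05:
        tasks.append(f"data_quality_check:{meta.name}")
    if meta.contains_pii and "masked" not in meta.tags:
        tasks.append(f"apply_masking_policy:{meta.name}")
    return tasks

orders = TableMetadata(
    name="sales.orders",
    last_updated=datetime.utcnow() - timedelta(hours=30),
    schema_version=7,
    expected_schema_version=8,
    null_rate=0.09,
    contains_pii=True,
)
print(plan_tasks(orders))
# ['refresh:sales.orders', 'schema_reconcile:sales.orders',
#  'data_quality_check:sales.orders', 'apply_masking_policy:sales.orders']
```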
>> Great. Thank you for that. Doug Henschen, any thoughts on what Sanjeev just said? Do you agree? Do you disagree?

>> Well, I agree with many aspects of what he says. I think there's a huge opportunity for consolidation and streamlining of these aspects of governance. Last year, Sanjeev, you said something like, we'll see more people using catalogs than BI, and I have to disagree. I don't think this is a category that's headed for mainstream adoption. It's a behind-the-scenes activity for the wonky few, or better yet, companies want machine learning and automation to take care of these messy details. We've seen these waves of management technologies, some of the latest being data observability and customer data platforms, but they failed to sweep away all the earlier investments in data quality and master data management. So, yes, I hope the latest tech offers glimmers that there's going to be a better, cleaner way of addressing these things. But to my mind, the business leaders, including the CIO, only want to spend as much time, effort, money, and resources on these sorts of things as it takes to avoid getting breached, ending up in the headlines, getting fired, or going to jail. So, vendors, bring on the ML and AI smarts and the automation of these sorts of activities.

>> So, if I may say something, the reason why we have this dichotomy between data catalogs and the BI vendors is because data catalogs are very soon not going to be standalone products, in my opinion. They're going to get embedded. So, when you use a BI tool, you'll actually use the catalog to find out what it is that you want to do, whether you are looking for data or you're looking for an existing dashboard. So, the catalog becomes embedded into the BI tool.

>> Hey, Dave Menninger, sometimes you have some data in your back pocket. Do you have any stats (chuckles) on this topic?

>> No, I'm glad you asked, because I'm going to... Now, data catalogs are something that's interesting. Sanjeev made a statement that data catalogs are falling out of favor. I don't care what you call them. They're valuable to organizations. Our research shows that organizations that have adequate data catalog technologies are three times more likely to express satisfaction with their analytics, for just the reasons that Sanjeev was talking about. You can find what you want, you know you're getting the right information, you know whether or not it's trusted. So, those are good things. So, we expect to see the capabilities, whether embedded or separate, continue to permeate the market.

>> And a lot of those catalogs are driven now by machine learning and things. So, they're learning from those patterns of usage when people use the data. (airy laughs)

>> All right. Okay. Thank you, guys. All right. Let's move on to the next one. Tony Baer, let's bring up the predictions. You've got something in here about the modern data stack. We need to rethink it. Is the modern data stack getting long in the tooth? Is it not so modern anymore?

>> I think, in a way, it's gotten almost too modern. I don't know if it's long in the tooth, but it is getting long. The modern data stack has traditionally been defined as: you have the data platform, which would be the operational database and the data warehouse.
And in between, you have all the tools that are necessary to essentially get that data from the operational realm, or the streaming realm for that matter, into the data warehouse, or as we might be seeing more and more, the data lakehouse. And I think what's important here is that we have seen a lot of progress, and this would be in the cloud, with the SaaS services. And especially you see that in the modern data stack, where all these players, not just the MongoDBs or the Oracles or the Amazons, have their database platforms; you see the Informaticas and all the other players, the Fivetrans, have their own SaaS services. And within those SaaS services, you get a certain degree of simplicity, which is that it takes all the housekeeping off the shoulders of the customers. That's a good thing. The problem is that what we're getting to, unfortunately, is what I would call lots of islands of simplicity, which means that it leaves it (Dave laughing) to the customer to have to integrate or put all that stuff together. It's a complex tool chain. And so, what we really need to think about here is that we have too many pieces. And going back to the discussion of catalogs, it's like we have so many catalogs out there, which one do we use? Because chances are most organizations do not rely on a single catalog at this point. What I'm calling on all the data providers, all the SaaS service providers, to do is literally get it together and essentially make this modern data stack less of a stack, make it more of a blending of an end-to-end solution. And that can come in a number of different ways. Part of it is that data platform providers have been adding services that are adjacent, and there are some very good examples of this. We've seen progress over the past year or so. For instance, MongoDB integrating search: it's a very common tool that applications developed on MongoDB use, so MongoDB built it into the database rather than requiring an extra Elasticsearch or OpenSearch stack. AWS just did the zero-ETL announcement, which is a first step towards simplifying the process of going from Aurora to Redshift. You've seen the same thing with Google, with BigQuery integrating streaming pipelines. And you're also seeing a lot of movement in database machine learning. So, there are some good moves in this direction, and I expect to see more of this this year. Part of it is the SaaS platforms adding some functionality. But more importantly, because you're never going to get... This is like herding cats: asking your data team and your developers to standardize on the same tools. In most organizations, that is not going to happen. So, take a look at the most popular combinations of tools and start to come up with some pre-built integrations and pre-built orchestrations, and offer some promotional pricing, maybe not quite two-for-one, but in other words, get two products or services for the price of one and a half. I see a lot of potential for this. And to me, if the goal is to simplify things, this is the next logical step, and I expect to see more of it here.

>> Yeah, and you see in Oracle, MySQL HeatWave, yet another example of eliminating that ETL. Carl Olofson, today, if you think about the data stack and the application stack, they're largely separate. Do you have any thoughts on how that's going to play out? Does that play into this prediction? What do you think?
>> Well, I think, that the... I really like Tony's phrase, islands of simplification. It really says (Tony chuckles) what's going on here, which is that all these different vendors you ask about, about how these stacks work. All these different vendors have their own stack vision. And you can... One application group is going to use one, and another application group is going to use another. And some people will say, let's go to, like you go to a Informatica conference and they say, we should be the center of your universe, but you can't connect everything in your universe to Informatica, so you need to use other things. So, the challenge is how do we make those things work together? As Tony has said, and I totally agree, we're never going to get to the point where people standardize on one organizing system. So, the alternative is to have metadata that can be shared amongst those systems and protocols that allow those systems to coordinate their operations. This is standard stuff. It's not easy. But the motive for the vendors is that they can become more active critical players in the enterprise. And of course, the motive for the customer is that things will run better and more completely. So, I've been looking at this in terms of two kinds of metadata. One is the meaning metadata, which says what data can be put together. The other is the operational metadata, which says basically where did it come from? Who created it? What's its current state? What's the security level? Et cetera, et cetera, et cetera. The good news is the operational stuff can actually be done automatically, whereas the meaning stuff requires some human intervention. And as we've already heard from, was it Doug, I think, people are disinclined to put a lot of definition into meaning metadata. So, that may be the harder one, but coordination is key. This problem has been with us forever, but with the addition of new data sources, with streaming data with data in different formats, the whole thing has, it's been like what a customer of mine used to say, "I understand your product can make my system run faster, but right now I just feel I'm putting my problems on roller skates. (chuckles) I don't need that to accelerate what's already not working." >> Excellent. Okay, Carl, let's stay with you. I remember in the early days of the big data movement, Hadoop movement, NoSQL was the big thing. And I remember Amr Awadallah said to us in theCUBE that SQL is the killer app for big data. So, your prediction here, if we bring that up is SQL is back. Please elaborate. >> Yeah. So, of course, some people would say, well, it never left. Actually, that's probably closer to true, but in the perception of the marketplace, there's been all this noise about alternative ways of storing, retrieving data, whether it's in key value stores or document databases and so forth. We're getting a lot of messaging that for a while had persuaded people that, oh, we're not going to do analytics in SQL anymore. We're going to use Spark for everything, except that only a handful of people know how to use Spark. Oh, well, that's a problem. Well, how about, and for ordinary conventional business analytics, Spark is like an over-engineered solution to the problem. SQL works just great. What's happened in the past couple years, and what's going to continue to happen is that SQL is insinuating itself into everything we're seeing. We're seeing all the major data lake providers offering SQL support, whether it's Databricks or... 
And of course, Snowflake is loving this, because that is what they do, and their success certainly points to the success of SQL, even MongoDB. And we were all, I think, at the MongoDB conference where on one day, we hear SQL is dead. They're not teaching SQL in schools anymore, and this kind of thing. And then, a couple days later at the same conference, they announced we're adding a new analytic capability based on SQL. But didn't you just say SQL is dead? So, the reality is that SQL is better understood than most other methods, certainly, of retrieving and finding data in a data collection, no matter whether it happens to be relational or non-relational. And even in systems that are very non-relational, such as graph and document databases, their query languages are being built or extended to resemble SQL, because SQL is something people understand. >> Now, you remember when we were in high school and you had to take the... the debating class, and you were forced to take one side and defend it. So, I was at a Vertica conference one time up on stage with Curt Monash, and I had to take the NoSQL, the world-is-changing paradigm shift side. And so just to be controversial, I said to him, Curt Monash, I said, who really needs ACID compliance anyway? And so, (chuckles) of course, his head exploded. But Tony Baer, what are your thoughts (guests laughing) on all this? >> Well, my first thought is congratulations, Dave, for surviving being up on stage with Curt Monash. >> Amen. (group laughing) >> I definitely would concur with Carl. We actually are definitely seeing a SQL renaissance and if there's any proof of the pudding here, I see lakehouse as being the icing on the cake. As Doug had predicted last year, now, (clears throat) for the record, I think, Doug was about a year ahead of time in his predictions that this year is really the year that I see (clears throat) the lakehouse ecosystems really firming up. You saw the first shots last year. But anyway, on this, data lakes will not go away. I'm actually on the home stretch of doing a market landscape on the lakehouse. And lakehouse will not replace data lakes in terms of that. There is the need for those data scientists who do know Python, who know Spark, to go in there and basically do their thing without all the restrictions or the constraints of a pre-built, pre-designed table structure. I get that. Same thing for developing models. But on the other hand, there is huge need. Basically, (clears throat) maybe MongoDB was saying that we're not teaching SQL anymore. Well, maybe we have an oversupply of SQL developers. Well, I'm being facetious there, but there is a huge skills base in SQL. Analytics have been built on SQL. The thing with lakehouse, and why this really helps to fuel a SQL revival, is that the core need in the data lake, what brought on the lakehouse, was not so much SQL, it was a need for ACID. And what was the best way to do it? It was through a relational table structure. So, the whole idea of ACID in the lakehouse was not to turn it into a transaction database, but to make the data trusted, secure, and more granularly governed, where you could govern down to column and row level, which you really could not do in a data lake or a file system. So, while lakehouse can be queried in any manner, you can go in there with Python or whatever, it's built on a relational table structure. And so, for that end, for those types of data lakes, it becomes the end state.
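Tony's point, that the lakehouse earns trust through an ACID relational table structure rather than by becoming a transaction database, can be sketched in a few lines. This is only an illustration, not any panelist's implementation: it assumes a Delta Lake-style table on a local Spark session with the delta-spark package available, and the table, column names, and values are invented.

```python
# Minimal sketch: an ACID table on data lake storage, updated with a MERGE and
# queried with plain SQL. Assumes the delta-spark jars are on the classpath.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("lakehouse-acid-sketch")
    # These configs enable the Delta Lake table format (assumed installed).
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# A governed, relational table sitting on the data lake.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_facts (
        order_id BIGINT,
        region   STRING,
        amount   DOUBLE
    ) USING DELTA
""")

# Late-arriving corrections land as an ACID upsert, so readers never see a
# half-applied change: the "trusted, secure" part of the argument.
corrections = spark.createDataFrame(
    [(1001, "EMEA", 250.0), (1002, "AMER", 75.5)],
    ["order_id", "region", "amount"],
)
corrections.createOrReplaceTempView("corrections")

spark.sql("""
    MERGE INTO sales_facts t
    USING corrections s
    ON t.order_id = s.order_id
    WHEN MATCHED THEN UPDATE SET t.amount = s.amount
    WHEN NOT MATCHED THEN INSERT *
""")

# And because it is just a table, the SQL crowd queries it directly.
spark.sql("SELECT region, SUM(amount) FROM sales_facts GROUP BY region").show()
```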
You cannot bypass that table structure as I learned the hard way during my research. So, the bottom line I'd say here is that lakehouse is proof that we're starting to see the revenge of the SQL nerds. (Dave chuckles) >> Excellent. Okay, let's bring up back up the predictions. Dave Menninger, this one's really thought-provoking and interesting. We're hearing things like data as code, new data applications, machines actually generating plans with no human involvement. And your prediction is the definition of data is expanding. What do you mean by that? >> So, I think, for too long, we've thought about data as the, I would say facts that we collect the readings off of devices and things like that, but data on its own is really insufficient. Organizations need to manipulate that data and examine derivatives of the data to really understand what's happening in their organization, why has it happened, and to project what might happen in the future. And my comment is that these data derivatives need to be supported and managed just like the data needs to be managed. We can't treat this as entirely separate. Think about all the governance discussions we've had. Think about the metadata discussions we've had. If you separate these things, now you've got more moving parts. We're talking about simplicity and simplifying the stack. So, if these things are treated separately, it creates much more complexity. I also think it creates a little bit of a myopic view on the part of the IT organizations that are acquiring these technologies. They need to think more broadly. So, for instance, metrics. Metric stores are becoming much more common part of the tooling that's part of a data platform. Similarly, feature stores are gaining traction. So, those are designed to promote the reuse and consistency across the AI and ML initiatives. The elements that are used in developing an AI or ML model. And let me go back to metrics and just clarify what I mean by that. So, any type of formula involving the data points. I'm distinguishing metrics from features that are used in AI and ML models. And the data platforms themselves are increasingly managing the models as an element of data. So, just like figuring out how to calculate a metric. Well, if you're going to have the features associated with an AI and ML model, you probably need to be managing the model that's associated with those features. The other element where I see expansion is around external data. Organizations for decades have been focused on the data that they generate within their own organization. We see more and more of these platforms acquiring and publishing data to external third-party sources, whether they're within some sort of a partner ecosystem or whether it's a commercial distribution of that information. And our research shows that when organizations use external data, they derive even more benefits from the various analyses that they're conducting. And the last great frontier in my opinion on this expanding world of data is the world of driver-based planning. Very few of the major data platform providers provide these capabilities today. These are the types of things you would do in a spreadsheet. And we all know the issues associated with spreadsheets. They're hard to govern, they're error-prone. 
And so, if we can take that type of analysis, collecting the occupancy of a rental property, the projected rise in rental rates, the fluctuations perhaps in occupancy, the interest rates associated with financing that property, we can project forward. And that's a very common thing to do. What the income might look like from that property, what the expenses might be, we can plan and purchase things appropriately. So, I think, we need this broader purview and I'm beginning to see some of those things happen. And the evidence today, I would say, is more focused around the metric stores and the feature stores, starting to see vendors offer those capabilities. And we're starting to see the MLOps elements of managing the AI and ML models find their way closer to the data platforms as well. >> Very interesting. When I hear metrics, I think of KPIs, I think of data apps that orchestrate people and places and things to optimize around a set of KPIs. It sounds like a metadata challenge more... Somebody once predicted they'll have more metadata than data. Carl, what are your thoughts on this prediction? >> Yeah, I think that what Dave is describing as data derivatives is in a way, another word for what I was calling operational metadata, which is not about the data itself, but how it's used, where it came from, what the rules are governing it, and that kind of thing. If you have a rich enough set of those things, then not only can you do a model of how well your vacation property rental may do in terms of income, but also how well your application that's measuring that is doing for you. In other words, how many times have I used it, how much data have I used and what is the relationship between the data that I've used and the benefits that I've derived from using it? Well, we don't have ways of doing that. What's interesting to me is that folks in the content world are way ahead of us here, because they have always tracked their content using these kinds of attributes. Where did it come from? When was it created, when was it modified? Who modified it? And so on and so forth. We need to do more of that with the structured data that we have, so that we can track how it's used. And also, it tells us how well we're doing with it. Is it really benefiting us? Are we being efficient? Are there improvements in processes that we need to consider? Because maybe data gets created and then it isn't used or it gets used, but it gets altered in some way that actually misleads people. (laughs) So, we need the mechanisms to be able to do that. So, I would say that that's... And I'd say that it's true that we need that stuff. I think that starting to expand is probably the right way to put it. It's going to be expanding for some time. I think, we're still a distance from having all that stuff really working together. >> Maybe we should say it's gestating. (Dave and Carl laughing) >> Sorry, if I may- >> Sanjeev, yeah, I was going to say this... Sanjeev, please comment. This sounds to me like it supports Zhamak Dehghani's principles, but please. >> Absolutely. So, whether we call it data mesh or not, I'm not getting into that conversation, (Dave chuckles) but data (audio breaking) (Tony laughing) everything that I'm hearing from what Dave is saying, Carl, this is the year when data products will start to take off. I'm not saying they'll become mainstream. They may take a couple of years to become so, but this is data products, all this thing about vacation rentals and how it is doing, that data is coming from different sources.
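Dave Menninger's rental-property example of driver-based planning, referenced just above, is the kind of calculation that usually lives in a hard-to-govern spreadsheet. Here is a minimal sketch of the same projection expressed as code a data platform could manage; every figure and field name is made up for illustration.

```python
# A rough sketch of driver-based planning: a handful of drivers (occupancy,
# rate growth, financing cost) projected forward in code instead of cells.
from dataclasses import dataclass

@dataclass
class RentalDrivers:
    nightly_rate: float          # current nightly rate
    rate_growth: float           # expected annual rate increase, e.g. 0.04
    occupancy: float             # expected occupancy, e.g. 0.72
    nights_available: int        # rentable nights per year
    annual_debt_service: float   # interest plus principal on the financing
    annual_operating_cost: float

def project_net_income(d: RentalDrivers, years: int) -> list[float]:
    """Project net income per year from the planning drivers."""
    projection = []
    rate = d.nightly_rate
    for _ in range(years):
        revenue = rate * d.occupancy * d.nights_available
        net = revenue - d.annual_debt_service - d.annual_operating_cost
        projection.append(round(net, 2))
        rate *= 1 + d.rate_growth  # the drivers, not hard-coded cells, move the plan
    return projection

drivers = RentalDrivers(
    nightly_rate=220.0, rate_growth=0.04, occupancy=0.72,
    nights_available=320, annual_debt_service=28_000.0,
    annual_operating_cost=14_500.0,
)
print(project_net_income(drivers, years=5))
```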
I'm packaging it into our data product. And to Carl's point, there's a whole operational metadata associated with it. The idea is for organizations to see things like developer productivity, how many releases am I doing of this? What data products are most popular? I'm actually right now in the process of formulating this concept that just like we had data catalogs, we are very soon going to be requiring a data products catalog. So, I can discover these data products. I'm not just creating data products left, right, and center. I need to know, do they already exist? What is the usage? If no one is using a data product, maybe I want to retire it and save cost. But this is a data product. Now, there's an associated thing that is also getting debated quite a bit called data contracts. And a data contract to me is literally just a formalization of all these aspects of a product. How do you use it? What is the SLA on it, what is the quality that I am prescribing? So, data product, in my opinion, shifts the conversation to the consumers or to the business people. Up to this point, Dave, when you're talking about data, all of data discovery and curation is very data producer-centric. So, I think, we'll see a shift more into the consumer space. >> Yeah. Dave, can I just jump in there just very quickly there, which is that what Sanjeev has been saying there, this is really central to what Zhamak has been talking about. It's basically about, one, data products are about the lifecycle management of data. Metadata is just elemental to that. And essentially, one of the things that she calls for is making data products discoverable. That's exactly what Sanjeev was talking about. >> By the way, did everyone notice how Sanjeev just snuck in another prediction there? So, we've got- >> Yeah. (group laughing) >> But you- >> Can we also say that he snuck in, I think, the term that we'll remember today, which is metadata museums. >> Yeah, but- >> Yeah. >> And also, a comment, Tony, on your last year's prediction, you're really talking about it's not something that you're going to buy from a vendor. >> No. >> It's very specific >> Mm-hmm. >> to an organization, their own data product. So, touche on that one. Okay, last prediction. Let's bring them up. Doug Henschen, BI analytics is headed to embedding. What does that mean? >> Well, we all know that conventional BI dashboarding reporting is really commoditized from a vendor perspective. It never enjoyed truly mainstream adoption. It's always that 25% of employees really using these things. I'm seeing rising interest in embedding concise analytics at the point of decision or better still, using analytics as triggers for automation and workflows, and not even necessitating human interaction with visualizations, for example, if we have confidence in the analytics. So, leading companies are pushing for next generation applications, part of this low-code, no-code movement we've seen. And they want to build that decision support right into the app. So, the analytic is right there. Leading enterprise apps vendors, Salesforce, SAP, Microsoft, Oracle, they're all building smart apps with the analytics, predictions, even recommendations built into these applications. And I think, the progressive BI analytics vendors are supporting this idea of driving insight to action, not necessarily necessitating humans interacting with it if there's confidence. So, we want prediction, we want embedding, we want automation.
This low-code, no-code development movement is very important to bringing the analytics to where people are doing their work. We got to move beyond the, what I call swivel chair integration, between where people do their work and going off to separate reports and dashboards, and having to interpret and analyze before you can go back and do take action. >> And Dave Menninger, today, if you want, analytics or you want to absorb what's happening in the business, you typically got to go ask an expert, and then wait. So, what are your thoughts on Doug's prediction? >> I'm in total agreement with Doug. I'm going to say that collectively... So, how did we get here? I'm going to say collectively as an industry, we made a mistake. We made BI and analytics separate from the operational systems. Now, okay, it wasn't really a mistake. We were limited by the technology available at the time. Decades ago, we had to separate these two systems, so that the analytics didn't impact the operations. You don't want the operations preventing you from being able to do a transaction. But we've gone beyond that now. We can bring these two systems and worlds together and organizations recognize that need to change. As Doug said, the majority of the workforce and the majority of organizations doesn't have access to analytics. That's wrong. (chuckles) We've got to change that. And one of the ways that's going to change is with embedded analytics. 2/3 of organizations recognize that embedded analytics are important and it even ranks higher in importance than AI and ML in those organizations. So, it's interesting. This is a really important topic to the organizations that are consuming these technologies. The good news is it works. Organizations that have embraced embedded analytics are more comfortable with self-service than those that have not, as opposed to turning somebody loose, in the wild with the data. They're given a guided path to the data. And the research shows that 65% of organizations that have adopted embedded analytics are comfortable with self-service compared with just 40% of organizations that are turning people loose in an ad hoc way with the data. So, totally behind Doug's predictions. >> Can I just break in with something here, a comment on what Dave said about what Doug said, which (laughs) is that I totally agree with what you said about embedded analytics. And at IDC, we made a prediction in our future intelligence, future of intelligence service three years ago that this was going to happen. And the thing that we're waiting for is for developers to build... You have to write the applications to work that way. It just doesn't happen automagically. Developers have to write applications that reference analytic data and apply it while they're running. And that could involve simple things like complex queries against the live data, which is through something that I've been calling analytic transaction processing. Or it could be through something more sophisticated that involves AI operations as Doug has been suggesting, where the result is enacted pretty much automatically unless the scores are too low and you need to have a human being look at it. So, I think that that is definitely something we've been watching for. I'm not sure how soon it will come, because it seems to take a long time for people to change their thinking. But I think, as Dave was saying, once they do and they apply these principles in their application development, the rewards are great. 
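Carl's "analytic transaction processing" description, where the application scores live data in-line and acts automatically unless the score is too low, can be sketched roughly as follows. The model, threshold, and field names here are placeholders rather than any vendor's API.

```python
# Sketch of the "insight to action" pattern: score the event inside the
# application, auto-act on high confidence, route low confidence to a human.
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    amount: float
    customer_tenure_days: int

def risk_score(order: Order) -> float:
    """Stand-in for an embedded model; returns probability the order is fine."""
    score = 0.9 if order.customer_tenure_days > 365 else 0.6
    return score - (0.2 if order.amount > 5_000 else 0.0)

CONFIDENCE_FLOOR = 0.7  # below this, a human looks at it

def handle_order(order: Order) -> str:
    score = risk_score(order)
    if score >= CONFIDENCE_FLOOR:
        # Analytics as a trigger for automation: no dashboard, no swivel chair.
        return f"auto-approved {order.order_id} (score {score:.2f})"
    # Low confidence: fall back to the human-in-the-loop path.
    return f"queued {order.order_id} for review (score {score:.2f})"

print(handle_order(Order("A-1", 120.0, 900)))
print(handle_order(Order("A-2", 9_400.0, 30)))
```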
>> Yeah, this is very much, I would say, very consistent with what we were talking about, I was talking about before, about basically rethinking the modern data stack and going into more of an end-to-end solution. I think, that what we're talking about clearly here is operational analytics. There'll still be a need for your data scientists to go off into their data lakes to do all that very exploratory work and that deep modeling. But clearly, it just makes sense to bring operational analytics into where people work, into their workspace, and further flatten that modern data stack. >> But with all this metadata and all this intelligence, we're talking about injecting AI into applications, it does seem like we're entering a new era of not only data, but a new era of apps. Today, most applications are about filling forms out or codifying processes and require human input. And it seems like there's enough data now and enough intelligence in the system that the system can actually pull data from, whether it's the transaction system, e-commerce, the supply chain, ERP, and actually do something with that data without human involvement, and present it to humans. Do you guys see this as a new frontier? >> I think, that's certainly- >> Very much so, but it's going to take a while, as Carl said. You have to design it, you have to get the prediction into the system, and the analytics at the point of decision have to be relevant to that decision point. >> And I also recall basically a lot of the ERP vendors back like 10 years ago were promising that. And the fact that we're still looking at the promises shows just how difficult, how much of a challenge it is to get to what Doug's saying. >> One element that could be applied in this case is event-driven architecture. If applications are developed that are event-driven rather than following the script or sequence that some programmer or designer had preconceived, then you'll have much more flexible applications. You can inject decisions at various points using this technology much more easily. It's a completely different way of writing applications. And it actually involves a lot more data, which is why we should all like it. (laughs) But in the end (Tony laughing) it's more stable, it's easier to manage, easier to maintain, and it's actually more efficient, which is the result of an MIT study from about 10 years ago, and still, we are not seeing this come to fruition in most business applications. >> And do you think it's going to require a new type of data platform database? Today, data's all far-flung. We see it's all over the clouds and at the edge. Today, you cache- >> We need a super cloud. >> You cache that data, you're throwing it into memory. I mentioned MySQL HeatWave. There are other examples where it's a brute force approach, but maybe we need new ways of laying data out on disk and new database architectures, and just when we thought we had it all figured out. >> Well, without referring to disk, which to my mind, is almost like talking about cave painting. I think, that (Dave laughing) all the things that have been mentioned by all of us today are elements of what I'm talking about. In other words, the whole improvement of the data mesh, the improvement of metadata across the board and improvement of the ability to track data and judge its freshness the way we judge the freshness of a melon or something like that, to determine whether we can still use it. Is it still good? That kind of thing.
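The operational metadata Carl keeps returning to, where the data came from, when it changed, who touched it, plus the melon-style freshness test, might look something like this in practice. The fields and the 24-hour policy are illustrative assumptions, not a standard.

```python
# Small sketch of operational metadata carried alongside a dataset, with a
# freshness check an event-driven application could consult before acting.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class OperationalMetadata:
    dataset: str
    source_system: str
    created_at: datetime
    last_modified: datetime
    modified_by: str
    security_level: str  # e.g. "restricted", "internal", "public"

    def is_fresh(self, max_age: timedelta) -> bool:
        """Is this data still good, or past its sell-by date?"""
        return datetime.now(timezone.utc) - self.last_modified <= max_age

meta = OperationalMetadata(
    dataset="demand_forecast",
    source_system="erp_orders",
    created_at=datetime(2023, 1, 3, tzinfo=timezone.utc),
    last_modified=datetime(2023, 1, 24, tzinfo=timezone.utc),
    modified_by="pipeline:dbt_nightly",
    security_level="internal",
)

if not meta.is_fresh(max_age=timedelta(hours=24)):
    print(f"{meta.dataset} is stale; skip the automated decision and rerun the feed")
```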
Bringing together data from multiple sources dynamically and in real time requires all the things we've been talking about. All the predictions that we've talked about today add up to elements that can make this happen. >> Well, guys, it's always tremendous to get these wonderful minds together and get your insights, and I love how it shapes the outcome here of the predictions, and let's see how we did. We're going to leave it there. I want to thank Sanjeev, Tony, Carl, David, and Doug. Really appreciate the collaboration and thought that you guys put into these sessions. Really, thank you. >> Thank you. >> Thanks, Dave. >> Thank you for having us. >> Thanks. >> Thank you. >> All right, this is Dave Vellante for theCUBE, signing off for now. Follow these guys on social media. Look for coverage on siliconangle.com, theCUBE.net. Thank you for watching. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Doug Henschen | PERSON | 0.99+ |
Dave Menninger | PERSON | 0.99+ |
Doug | PERSON | 0.99+ |
Carl | PERSON | 0.99+ |
Carl Olofson | PERSON | 0.99+ |
Dave Menninger | PERSON | 0.99+ |
Tony Baer | PERSON | 0.99+ |
Tony | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Collibra | ORGANIZATION | 0.99+ |
Curt Monash | PERSON | 0.99+ |
Sanjeev Mohan | PERSON | 0.99+ |
Christian Kleinerman | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Walmart | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Sanjeev | PERSON | 0.99+ |
Constellation Research | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Ventana Research | ORGANIZATION | 0.99+ |
2022 | DATE | 0.99+ |
Hazelcast | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Tony Baer | PERSON | 0.99+ |
25% | QUANTITY | 0.99+ |
2021 | DATE | 0.99+ |
last year | DATE | 0.99+ |
65% | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
today | DATE | 0.99+ |
five-year | QUANTITY | 0.99+ |
TigerGraph | ORGANIZATION | 0.99+ |
Databricks | ORGANIZATION | 0.99+ |
two services | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
David | PERSON | 0.99+ |
RisingWave Labs | ORGANIZATION | 0.99+ |
Jesse Cugliotta & Nicholas Taylor | The Future of Cloud & Data in Healthcare
(upbeat music) >> Welcome back to Supercloud 2. This is Dave Vellante. We're here exploring the intersection of data and analytics in the future of cloud and data. In this segment, we're going to look deeper into the life sciences business with Jesse Cugliotta, who leads the Healthcare and Life Sciences industry practice at Snowflake. And Nicholas Nick Taylor, who's the executive director of Informatics at Ionis Pharmaceuticals. Gentlemen, thanks for coming on theCUBE and participating in the program. Really appreciate it. >> Thank you for having us- >> Thanks for having me. >> You're very welcome, okay, we're going to really try to look at data sharing as a use case and try to understand what's happening in the healthcare industry generally and specifically, how Nick thinks about sharing data in a governed fashion, and whether tapping the capabilities of multiple clouds is advantageous long term or presents more challenges than the effort is worth. And to start, Jesse, you lead this industry practice for Snowflake and it's a challenging and vibrant area. It's one that's hyper-focused on data privacy. So the first question is, you know there was a time when healthcare and other regulated industries wouldn't go near the cloud. What are you seeing today in the industry around cloud adoption and specifically multi-cloud adoption? >> Yeah, for years I've heard that healthcare and life sciences has been cloud averse, but in spite of all of that, if you look at a lot of aspects of this industry today, they've been running in the cloud for over 10 years now. Particularly when you look at CRM technologies or HR or HCM, even clinical technologies like EDC or ETMF. And it's interesting that you mentioned multi-cloud as well because this has always been an underlying reality especially within life sciences. This industry grows through acquisition where companies are looking to boost their future development pipeline either by buying up smaller biotechs that may have like a late or a mid-stage promising candidate. And what typically happens is the larger pharma could then use their commercial muscle and their regulatory experience to move it to approvals and into the market. And I think the last few decades of cheap capital certainly accelerated that trend over the last couple of years. But this typically means that these new combined institutions may have technologies that are running on multiple clouds or multiple cloud strategies in various different regions to your point. And what we've often found is that they're not planning to standardize everything onto a single cloud provider. They're often looking for technologies that embrace this multi-cloud approach and work seamlessly across them. And I think this is a big reason why we, here at Snowflake, we've seen such strong momentum and growth across this industry because healthcare and life science has actually been one of our fastest growing sectors over the last couple of years. And a big part of that is in fact that we run on not only all three major cloud providers, but individual accounts within each and any one of them have the ability to communicate and interoperate with one another, like a globally interconnected database. >> Great, thank you for that setup. And so Nick, tell us more about your role and Ionis Pharma please. >> Sure. So I've been at Ionis for around five years now. You know, when I joined, the IT department was pretty small. There wasn't a lot of warehousing, there wasn't a lot of kind of big data there.
We saw an opportunity with Snowflake pretty early on as a provider that would be a lot of benefit for us, you know, 'cause we're small, wanted something that was fairly hands off. You know, I remember the days where you had to get a lot of DBAs in to fine tune your databases, make sure everything was running really, really well. The notion that there's, you know, no indexes to tune, right? There's very few knobs and dials, you can turn on Snowflake. That was appealing that, you know, it just kind of worked. So we found a use case to bring the platform in. We basically used it as a logging replacement as a Splunk kind of replacement with a platform called Elysium Analytics as a way to just get it in the door and give us the opportunity to solve a real world use case, but also to help us start to experiment using Snowflake as a platform. It took us a while to A, get the funding to bring it in, but B, build the momentum behind it. But, you know, as we experimented we added more data in there, we ran a few more experiments, we piloted in few more applications, we really saw the power of the platform and now, we are becoming a commercial organization. And with that comes a lot of major datasets. And so, you know, we really see Snowflake as being a very important part of our ecology going forward to help us build out our infrastructure. >> Okay, and you are running, your group runs on Azure, it's kind of mono cloud, single cloud, but others within Ionis are using other clouds, but you're not currently, you know, collaborating in terms of data sharing. And I wonder if you could talk about how your data needs have evolved over the past decade. I know you came from another highly regulated industry in financial services. So what's changed? You sort of touched on this before, you had these, you know, very specialized individuals who were, you know, DBAs, and, you know, could tune databases and the like, so that's evolved, but how has generally your needs evolved? Just kind of make an observation over the last, you know, five or seven years. What have you seen? >> Well, we, I wasn't in a group that did a lot of warehousing. It was more like online trade capture, but, you know, it was very much on-prem. You know, being in the cloud is very much a dirty word back then. I know that's changed since I've left. But in, you know, we had major, major teams of everyone who could do everything, right. As I mentioned in the pharma organization, there's a lot fewer of us. So the data needs there are very different, right? It's, we have a lot of SaaS applications. One of the difficulties with bringing a lot of SaaS applications on board is obviously data integration. So making sure the data is the same between them. But one of the big problems is joining the data across those SaaS applications. So one of the benefits, one of the things that we use Snowflake for is to basically take data out of these SaaS applications and load them into a warehouse so we can do those joins. So we use technologies like Boomi, we use technologies like Fivetran, like DBT to bring this data all into one place and start to kind of join that basically, allow us to do, run experiments, do analysis, basically take better, find better use for our data that was siloed in the past. You mentioned- >> Yeah. And just to add on to Nick's point there. >> Go ahead. 
>> That's actually something very common that we're seeing across the industry is because a lot of these SaaS applications that you mentioned, Nick, they're with from vendors that are trying to build their own ecosystem in walled garden. And by definition, many of them do not want to integrate with one another. So from a, you know, from a data platform vendor's perspective, we see this as a huge opportunity to help organizations like Ionis and others kind of deal with the challenges that Nick is speaking about because if the individual platform vendors are never going to make that part of their strategy, we see it as a great way to add additional value to these customers. >> Well, this data sharing thing is interesting. There's a lot of walled gardens out there. Oracle is a walled garden, AWS in many ways is a walled garden. You know, Microsoft has its walled garden. You could argue Snowflake is a walled garden. But the, what we're seeing and the whole reason behind the notion of super-cloud is we're creating an abstraction layer where you actually, in this case for this use case, can share data in a governed manner. Let's forget about the cross-cloud for a moment. I'll come back to that, but I wonder, Nick, if you could talk about how you are sharing data, again, Snowflake sort of, it's, I look at Snowflake like the app store, Apple, we're going to control everything, we're going to guarantee with data clean rooms and governance and the standards that we've created within that platform, we're going to make sure that it's safe for you to share data in this highly regulated industry. Are you doing that today? And take us through, you know, the considerations that you have in that regard. >> So it's kind of early days for us in Snowflake in general, but certainly in data sharing, we have a couple of examples. So data marketplace, you know, that's a great invention. It's, I've been a small IT shop again, right? The fact that we are able to just bring down terabyte size datasets straight into our Snowflake and run analytics directly on that is huge, right? The fact that we don't have to FTP these massive files around run jobs that may break, being able to just have that on tap is huge for us. We've recently been talking to one of our CRO feeds- CRO organizations about getting their data feeds in. Historically, this clinical trial data that comes in on an FTP file, we have to process it, take it through the platforms, put it into the warehouse. But one of the CROs that we talked to recently when we were reinvestigate in what data opportunities they have, they were a Snowflake customer and we are, I think, the first production customer they have, have taken that feed. So they're basically exposing their tables of data that historically came in these FTP files directly into our Snowflake instance now. We haven't taken advantage of that. It only actually flipped the switch about three or four weeks ago. But that's pretty big for us again, right? We don't have to worry about maintaining those jobs that take those files in. We don't have to worry about the jobs that take those and shove them on the warehouse. We now have a feed that's directly there that we can use a tool like DBT to push through directly into our model. And then the third avenue that's came up, actually fairly recently as well was genetics data. So genetics data that's highly, highly regulated. We had to be very careful with that. 
And we had a conversation with Snowflake about the data clean rooms practice, and we see that as a pretty interesting opportunity. We are having one organization run genetic analysis and being able to send us those genetic datasets, but then there's another organization that actually has the, in quotes, "metadata" around that, so age, ethnicity, location, et cetera. And being able to join those two datasets through some kind of mechanism would be really beneficial to the organization. Being able to build a data clean room so we can put that genetic data in a secure place, anonymize it, and then share the amalgamated data back out in a way that's able to be joined to the anonymized metadata, that could be pretty huge for us as well. >> Okay, so this is interesting. So you talk about FTP, which was the common way to share data. And so it was basically, okay, I got it, now you take it and do whatever you want with it. Now we're talking, Jesse, about sharing the same copy of live data. How common is that use case in your industry? >> It's become very common over the last couple of years. And I think a big part of it is having the right technology to do it effectively. You know, as Nick mentioned, historically, this was done by people sending files around. And the challenge with that approach, of course, while there are multiple challenges, one, every time you send a file around you're, by definition, creating a copy of the data because you have to pull it out of your system of record, put it into a file, put it on some server where somebody else picks it up. And by definition at that point you've lost governance. So this creates challenges and general hesitation to doing so. It's not that it hasn't happened, but the other challenge with it is that the data's no longer real time. You know, you're working with a copy of data that is only as fresh as the time when it was actually extracted. And that creates limitations in terms of how effective this can be. What we're starting to see now with some of our customers is live sharing of information. And there's two aspects of that that are important. One is that you're not actually physically creating the copy and sending it to someone else, you're actually exposing it from where it exists and allowing another consumer to interact with it from their own account that could be in another region or even running in another cloud. So this concept of super-cloud or cross-cloud could be becoming realized here. But the other important aspect of it is that when that other- when that other entity is querying your data, they're seeing it in a real time state. And this is particularly important when you think about use cases like supply chain planning, where you're leveraging data across various different enterprises. If I'm a manufacturer or if I'm a contract manufacturer and I can see the actual inventory positions of my clients, of my distributors, of the levels of consumption at the pharmacy or the hospital, that gives me a lot of indication as to how my demand profile is changing over time versus working with a static picture that may have been from three weeks ago. And this has become incredibly important as supply chains are becoming more constrained and the ability to plan accurately has never been more important. >> Yeah. So the race is on to solve these problems. So, we started with, hey, okay, cloud, Dave, we're going to simplify database, we're going to put it in the cloud, give virtually infinite resources, separate compute from storage.
Okay, check, we got that. Now we've moved into sort of data clean rooms and governance and you've got an ecosystem that's forming around this to make it safer to share data. And then, you know, nirvana, at least near term nirvana, is we're going to build data applications and we're going to be able to share live data and then you start to get into monetization. Do you see, Nick, in the near future, I know you've got relationships with, for instance, big pharma like AstraZeneca, do you see a situation where you start sharing data with them? Is that in the near term? Is that more long term? What are the considerations in that regard? >> I mean, it's something we've been thinking about. We haven't actually addressed that yet. Yeah, I could see situations where, you know, some of these big relationships where we do need to share a lot of data, it would be very nice to be able to just flick a switch and share our data assets across to those organizations. But, you know, that's a ways off for us now. We're mainly looking at bringing data in at the moment. >> One of the things that we've seen in financial services in particular, and Jesse, I'd love to get your thoughts on this, is companies like Goldman or Capital One or Nasdaq taking their stack, their software, their tooling, actually putting it on the cloud and facing it to their customers and selling that as a new monetization vector as part of their digital or business transformation. Are you seeing that, Jesse, at all in healthcare? Is it happening today, or do you see a day when that happens, or is healthcare just too scary to do that? >> No, we're seeing the early stages of this as well. And I think it's for some of the reasons we talked about earlier. You know, it's a much more secure way to work with a colleague if you don't have to copy your data and potentially expose it. And some of the reasons that people have historically copied that data is that they needed to leverage some sort of algorithm or application that a third party was providing. So maybe someone was predicting the ideal location to run a clinical trial for this particular rare disease category where there are only so many patients around the world that may actually be candidates for this disease. So you have to pick the ideal location. Well, sending the dataset to do so, you know, would involve a fairly complicated process similar to what Nick was mentioning earlier. If the company who was providing the logic or the algorithm to determine that location could bring that algorithm to you and you run it against your own data, that's a much more ideal and a much safer and more secure way for this industry to actually start to work with some of these partners and vendors. And that's one of the things that we're looking to enable going into this year is that, you know, the whole concept should be bring the logic to your data versus your data to the logic, and the underlying sharing mechanisms that we've spoken about are actually what are powering that today. >> And so thank you for that, Jesse. >> Yes, Dave. >> And so Nick- Go ahead please. >> Yeah, if I could add, yeah, if I could add to that, that's something certainly we've been thinking about. In fact, we started talking to Snowflake about that a couple of years ago. We saw the power there again of the platform to be able to say, well, could we, we were thinking in more of a data share, but could we share our data out to say an AI/ML vendor, have them do the analytics and then share the data, the results back to us.
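The live-share pattern Nick and Jesse describe, exposing governed tables to a partner instead of FTP'ing files around, looks roughly like the following with Snowflake's secure data sharing. This is a hedged sketch using the Snowflake Python connector; every account, database, schema, and table identifier is a placeholder, and the exact privileges a real deployment needs may differ.

```python
# Provider exposes governed tables as a share; consumer mounts it read-only.
import snowflake.connector

provider = snowflake.connector.connect(
    account="provider_org-provider_acct",  # placeholder identifiers
    user="data_eng", password="...", role="ACCOUNTADMIN", warehouse="XS_WH",
)
cur = provider.cursor()
# Expose the governed tables once; no copies, no files in flight.
cur.execute("CREATE SHARE clinical_feed_share")
cur.execute("GRANT USAGE ON DATABASE clinical TO SHARE clinical_feed_share")
cur.execute("GRANT USAGE ON SCHEMA clinical.trials TO SHARE clinical_feed_share")
cur.execute("GRANT SELECT ON TABLE clinical.trials.site_enrollment TO SHARE clinical_feed_share")
# Entitle the partner's account, which can sit in another region or cloud.
cur.execute("ALTER SHARE clinical_feed_share ADD ACCOUNTS = partner_org.partner_acct")

# On the consumer side, the share mounts as a database and is queried live.
consumer = snowflake.connector.connect(
    account="partner_org-partner_acct",
    user="analyst", password="...", role="ACCOUNTADMIN", warehouse="XS_WH",
)
ccur = consumer.cursor()
ccur.execute(
    "CREATE DATABASE clinical_feed FROM SHARE provider_org.provider_acct.clinical_feed_share"
)
ccur.execute(
    "SELECT site_id, COUNT(*) FROM clinical_feed.trials.site_enrollment GROUP BY site_id"
)
print(ccur.fetchall())
```

One design point worth noting: because the consumer mounts the share as a read-only database, the provider never loses governance of the underlying tables, which is the contrast with the file-based approach discussed above.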
Now, you know, there's more powerful mechanisms to do that within the Snowflake ecosystem now, but you know, we probably wouldn't need to have onsite AI/ML people, right? Some of that stuff's very sophisticated, expensive resources, hard to find, you know, it's much better for us to find a company that would be able to build those analytics, maintain those analytics for us. And you know, we saw an opportunity to do that a couple years ago and we're kind of excited about the opportunity there that we can just basically do it with a no op, right? We share the data route, we have the analytics done, we get the result back and it's just fairly seamless. >> I mean, I could have a whole another Cube session on this, guys, but I mean, I just did a a session with Andy Thurai, a Constellation research about how difficult it's been for organization to get ROI because they don't have the expertise in house so they want to either outsource it or rely on vendor R&D companies to inject that AI and machine intelligence directly into applications. My follow-up question to you Nick is, when you think about, 'cause Jesse was talking about, you know, let the data basically stay where it is and you know bring the compute to that data. If that data lives on different clouds, and maybe it's not your group, but maybe it's other parts of Ionis or maybe it's your partners like AstraZeneca, or you know, the AI/ML partners and they're potentially on other clouds or that data is on other clouds. Do you see that, again, coming back to super-cloud, do you see it as an advantage to be able to have a consistent experience across those clouds? Or is that just kind of get in the way and make things more complex? What's your take on that, Nick? >> Well, from the vendors, so from the client side, it's kind of seamless with Snowflake for us. So we know for a fact that one of the datasets we have at the moment, Compile, which is a, the large multi terabyte dataset I was talking about. They're on AWS on the East Coast and we are on Azure on the West Coast. And they had to do a few tweaks in the background to make sure the data was pushed over from, but from my point of view, the data just exists, right? So for me, I think it's hugely beneficial that Snowflake supports this kind of infrastructure, right? We don't have to jump through hoops to like, okay, well, we'll download it here and then re-upload it here. They already have the mechanism in the background to do these multi-cloud shares. So it's not important for us internally at the moment. I could see potentially at some point where we start linking across different groups in the organization that do have maybe Amazon or Google Cloud, but certainly within our providers. We know for a fact that they're on different services at the moment and it just works. >> Yeah, and we learned from Benoit Dageville, who came into the studio on August 9th with first Supercloud in 2022 that Snowflake uses a single global instance across regions and across clouds, yeah, whether or not you can query across you know, big regions, it just depends, right? It depends on latency. You might have to make a copy or maybe do some tweaks in the background. But guys, we got to jump, I really appreciate your time. Really thoughtful discussion on the future of data and cloud, specifically within healthcare and pharma. Thank you for your time. >> Thanks- >> Thanks for having us. >> All right, this is Dave Vellante for theCUBE team and my co-host, John Furrier. 
Keep it right there for more action at Supercloud 2. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jesse Cugliotta | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Goldman | ORGANIZATION | 0.99+ |
AstraZeneca | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Capital One | ORGANIZATION | 0.99+ |
Jesse | PERSON | 0.99+ |
Andy Thurai | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
August 9th | DATE | 0.99+ |
Nick | PERSON | 0.99+ |
Nasdaq | ORGANIZATION | 0.99+ |
Nicholas Nick Taylor | PERSON | 0.99+ |
five | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Ionis | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Ionis Pharma | ORGANIZATION | 0.99+ |
Nicholas Taylor | PERSON | 0.99+ |
Ionis Pharmaceuticals | ORGANIZATION | 0.99+ |
Snowflake | ORGANIZATION | 0.99+ |
first question | QUANTITY | 0.99+ |
Benoit Dageville | PERSON | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
seven years | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
2022 | DATE | 0.99+ |
today | DATE | 0.99+ |
over 10 years | QUANTITY | 0.98+ |
Snowflake | TITLE | 0.98+ |
one | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
two aspects | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
this year | DATE | 0.97+ |
each | QUANTITY | 0.97+ |
two datasets | QUANTITY | 0.97+ |
West Coast | LOCATION | 0.97+ |
four weeks ago | DATE | 0.97+ |
around five years | QUANTITY | 0.97+ |
three | QUANTITY | 0.95+ |
first production | QUANTITY | 0.95+ |
East Coast | LOCATION | 0.95+ |
third avenue | QUANTITY | 0.95+ |
one organization | QUANTITY | 0.94+ |
theCUBE | ORGANIZATION | 0.94+ |
couple years ago | DATE | 0.93+ |
single cloud | QUANTITY | 0.92+ |
single cloud provider | QUANTITY | 0.92+ |
hree weeks ago | DATE | 0.91+ |
one place | QUANTITY | 0.88+ |
Azure | TITLE | 0.86+ |
last couple of years | DATE | 0.85+ |
Veronika Durgin, Saks | The Future of Cloud & Data
(upbeat music) >> Welcome back to Supercloud 2, an open collaborative where we explore the future of cloud and data. Now, you might recall last August at the inaugural Supercloud event we validated the technical feasibility and tried to further define the essential technical characteristics, and of course the deployment models of so-called supercloud. That is, sets of services that leverage the underlying primitives of hyperscale clouds, but are creating new value on top of those clouds for organizations at scale. So we're talking about capabilities that fundamentally weren't practical or even possible prior to the ascendancy of the public clouds. And so today at Supercloud 2, we're digging further into the topic with input from real-world practitioners. And we're exploring the intersection of data and cloud, And importantly, the realities and challenges of deploying technology for a new business capability. I'm pleased to have with me in our studios, west of Boston, Veronika Durgin, who's the head of data at Saks. Veronika, welcome. Great to see you. Thanks for coming on. >> Thank you so much. Thank you for having me. So excited to be here. >> And so we have to say upfront, you're here, these are your opinions. You're not representing Saks in any way. So we appreciate you sharing your depth of knowledge with us. >> Thank you, Dave. Yeah, I've been doing data for a while. I try not to say how long anymore. It's been a while. But yeah, thank you for having me. >> Yeah, you're welcome. I mean, one of the highlights of this past year for me was hanging out at the airport with you after the Snowflake Summit. And we were just chatting about sort of data mesh, and you were saying, "Yeah, but." There was a yeah, but. You were saying there's some practical realities of actually implementing these things. So I want to get into some of that. And I guess starting from a perspective of how data has changed, you've seen a lot of the waves. I mean, even if we go back to pre-Hadoop, you know, that would shove everything into an Oracle database, or, you know, Hadoop was going to save our data lives. And the cloud came along and, you know, that was kind of a disruptive force. And, you know, now we see things like, whether it's Snowflake or Databricks or these other platforms on top of the clouds. How have you observed the change in data and the evolution over time? >> Yeah, so I started as a DBA in the data center, kind of like, you know, growing up trying to manage whatever, you know, physical limitations a server could give us. So we had to be very careful of what we put in our database because we were limited. We, you know, purchased that piece of hardware, and we had to use it for the next, I don't know, three to five years. So it was only, you know, we focused on only the most important critical things. We couldn't keep too much data. We had to be super efficient. We couldn't add additional functionality. And then Hadoop came along, which is like, great, we can dump all the data there, but then we couldn't get data out of it. So it was like, okay, great. Doesn't help either. And then the cloud came along, which was incredible. I was probably the most excited person. I'm lying, but I was super excited because I no longer had to worry about what I can actually put in my database. Now I have that, you know, scalability and flexibility with the cloud. So okay, great, that data's there, and I can also easily get it out of it, which is really incredible. 
>> Well, but so, I'm inferring from what you're saying with Hadoop, it was like, okay, no schema on write. And then you got to try to make sense out of it. But so what changed with the cloud? What was different? >> So I'll tell a funny story. I actually successfully avoided Hadoop. The only time- >> Congratulations. >> (laughs) I know, I'm like super proud of it. I don't know how that happened, but the only time I worked for a company that had Hadoop, all I remember is that they were running jobs that were taking over 24 hours to get data out of it. And they were realizing that, you know, dumping data without any structure into this massive thing that required, you know, really skilled engineers wasn't really helpful. So what changed, and I'm kind of thinking of like, kind of like how Snowflake started, right? They were marketing themselves as a data warehouse. For me, moving from SQL Server to Snowflake was a non-event. It was comfortable, I knew what it was, I knew how to get data out of it. And I think that's the important part, right? Cloud, this like, kind of like, vague, high-level thing, magical, but the reality is cloud is the same as what we had on prem. So it's comfortable there. It's not scary. You don't need super new additional skills to use it. >> But you're saying what's different is the scale. So you can throw resources at it. You don't have to worry about depreciating your hardware over three to five years. Hey, I have an asset that I have to take advantage of. Is that the big difference? >> Absolutely. Actually, from kind of like operational perspective, which it's funny. Like, I don't have to worry about it. I use what I need when I need it. And not to take this completely in the opposite direction, people stop thinking about using things in a very smart way, right? You like, scale and you walk away. And then, you know, the cool thing about cloud is it's scalable, but you also should not use it when you don't need it. >> So what about this idea of multicloud. You know, supercloud sort of tries to go beyond multicloud. it's like multicloud by accident. And now, you know, whether it's M&A or, you know, some Skunkworks is do, hey, I like Google's tools, so I'm going to use Google. And then people like you are called on to, hey, how do we clean up this mess? And you know, you and I, at the airport, we were talking about data mesh. And I love the concept. Like, doesn't matter if it's a data lake or a data warehouse or a data hub or an S3 bucket. It's just a node on the mesh. But then, of course, you've got to govern it. You've got to give people self-serve. But this multicloud is a reality. So from your perspective, from a practitioner's perspective, what are the advantages of multicloud? We talk about the disadvantages all the time. Kind of get that, but what are the advantages? >> So I think the first thing when I think multicloud, I actually think high-availability disaster recovery. And maybe it's just how I grew up in the data center, right? We were always worried that if something happened in one area, we want to make sure that we can bring business up very quickly. So to me that's kind of like where multicloud comes to mind because, you know, you put your data, your applications, let's pick on AWS for a second and, you know, US East in AWS, which is the busiest kind of like area that they have. If it goes down, for my business to continue, I would probably want to move it to, say, Azure, hypothetically speaking, again, or Google, whatever that is. 
So to me, and probably again based on my background, disaster recovery and high availability come to mind as multicloud first. But now the other part of it is that there are, you know, companies and tools and applications that are being built in, you know, pick your cloud. How do we talk to each other? And more importantly, how do we data share? You know, I work with data. This is what I do. So if I want to get data from a company that's using, say, Google, how do we share it in a smooth way where it doesn't have to be this crazy, I don't know, SFTP file moving? So that's where supercloud comes to mind for me, in practical applications. How do we create that mesh, that network where we can easily share data with each other? >> So you kind of answered my next question, which is do you see use cases going beyond HA? I mean, HA and DR was, remember, the original cloud use case. That and bursting, you know, for Thanksgiving or for Black Friday. So you see an opportunity to go beyond that with practical use cases. >> Absolutely. I think, you know, we're getting to a world where every company is a data company. We all collect a lot of data. We want to use it for whatever that is. It doesn't necessarily mean sell it, but use it to our competitive advantage. So how do we do it in a very smooth, easy way, which opens additional opportunities for companies? >> You mentioned data sharing. And that's obviously, you know, I met you at Snowflake Summit. That's a big thing of Snowflake's. And of course, you've got Databricks trying to do similar things with open technology. What do you see as the trade-offs there? Because with Snowflake, you've got to come into their party, you're in their world, and you're kind of locked into that world. Now they're trying to open up. And of course, Databricks is, you know, "our world is wide open." Well, we know what that means, you know. The governance. And so now you're seeing, you saw Amazon come out with data clean rooms, which was, you know, a good idea that Snowflake had several years before. It's good. It's good validation. So how do you think about the trade-offs between kind of openness and freedom versus control? Is the latter just far more important? >> I'll tell you, it depends, right? It's kind of like- >> Could be insulting to that. >> Yeah, I know. It depends because I don't know the answer. It depends, I think, on the use case and application. Ultimately every company wants to make money. That's the beauty of our capitalistic economy, right? We're driven 'cause we want to make money. But from the use case, you know, how do I sell a product to somebody who's in Google if I am in AWS, right? It's like, we're limiting ourselves if we just do one cloud. But again, it's difficult because at the same time, every cloud provider wants you to be locked into their cloud, which is probably why whoever has data sharing now offers it, because they want you to stay within their ecosystem. But then again, companies are limited. You know, there are applications that are starting to be built on top of clouds. How do we ensure that, you know, I can use that application regardless of what cloud my company is using or I just happen to like? >> You know, and it's true they want you to stay in their ecosystem 'cause they'll make more money. But as well, you think about Apple, right? Does Apple do it 'cause they can make more money? Yes, but it's also that they have more control, right?
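Since Snowflake came up, here is a minimal sketch of what "sharing without SFTP" can look like on that platform, assuming the provider-side secure data sharing pattern. The account, database, table, and share names below are made up for illustration, and the exact grants a given team needs will vary.

    # Provider-side sketch of Snowflake secure data sharing: the consumer account
    # reads the shared table in place, with no file extracts or SFTP hand-offs.
    # All identifiers (account, database, table, share) are hypothetical.
    # Requires: pip install snowflake-connector-python
    import snowflake.connector

    conn = snowflake.connector.connect(
        user="DATA_ADMIN",            # placeholder credentials
        password="***",
        account="my_org-my_account",  # placeholder account identifier
    )
    cur = conn.cursor()
    try:
        cur.execute("CREATE SHARE IF NOT EXISTS sales_share")
        cur.execute("GRANT USAGE ON DATABASE sales_db TO SHARE sales_share")
        cur.execute("GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share")
        cur.execute("GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share")
        # Hand the share to a partner account; they query it without copying data.
        cur.execute("ALTER SHARE sales_share ADD ACCOUNTS = partner_account")
    finally:
        cur.close()
        conn.close()

The point Veronika is making is that the consumer queries the shared table in place; nothing is exported, staged, or shipped between companies.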
Am I correct that technically it's going to be easier to govern that data if it's all the same sort of standard, right? >> Absolutely. 100%. I didn't answer that question. You have to govern and you have to control. And honestly, it's not a nice-to-have anymore. There are compliances. There are legal compliances around data. Everybody at some point wants to ensure that, you know, and as a person, quite honestly, I don't like when my data's used and I don't know how. Like, it's a little creepy, right? So we have to come up with standards around that. But then I also go back in the day to EDI, right? Electronic data interchange. That was figured out. There were standards. Companies were sending data to each other. It was pretty standard. So I don't know. Like, we'll get there. >> Yeah, so I was going to ask you, do you see a day where open standards actually emerge to enable that? And then isn't that the great disruptor to sort of the proprietary stack? >> I think so. I think for us to smoothly exchange data across, you know, various systems, various applications, we'll have to agree to have standards. >> From a developer perspective, you know, back to the sort of supercloud concept, one of the components of the essential characteristics is you've got this PaaS layer that provides consistency across clouds, and it has unique attributes specific to the purpose of that supercloud. So in the instance of Snowflake, it's data sharing. In the case of, you know, VMware, it might be infrastructure or self-serve infrastructure that's consistent. From a developer perspective, what do you hear from developers in terms of what they want? Are we close to getting that across clouds? >> I think developers always want freedom and the ability to engineer. And oftentimes, (laughs) you know, just as an engineer, I always want to build something, and it's not always about what is actually applicable; it's something I want to do. I think we'll land there, but not because we are, you know, acting out of the kindness of our own hearts. I think as a necessity we will have to agree to standards, and that'll move the needle. Yeah. >> What are the limitations that you see of cloud and this notion of, you know, even cross cloud, right? I mean, this one cloud can't do it all. You know, but what do you see as the limitations of clouds? >> I mean, it's funny, I always think, you know, again, probably because of my background, I grew up in the data center. We were physically limited by space, right? You can only put so many servers in the rack and so many racks in the data center, and then you run out of space. Earth has limited space, right? And we have so many data centers, and everybody's collecting a lot of data that we actually want to use. We're not just collecting for the sake of collecting it anymore. We truly want to take advantage of it, because servers have enough power, right, to crank through it. But we will run out of space. So how do we balance that? How do we balance that data across all the various data centers? And I know I'm maybe talking crazy, but until we figure out how to build a data center on the Moon, right, we will have to figure out how to take advantage of all the compute capacity that we have across the world. >> And where does latency fit in? I mean, is it as much of a problem as people sort of think it is? Maybe it depends too.
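To make the EDI analogy concrete, here is a small sketch of the modern equivalent: both parties agree on a schema and validate every record against it before it crosses an organizational boundary. The contract fields are invented for illustration, and the example assumes the third-party jsonschema package.

    # Sketch: the modern analogue of EDI is an agreed-upon schema that both sides
    # validate against before exchanging records. The contract below is invented
    # purely for illustration. Requires: pip install jsonschema
    from jsonschema import validate, ValidationError

    ORDER_CONTRACT = {
        "type": "object",
        "required": ["order_id", "sku", "quantity", "currency"],
        "properties": {
            "order_id": {"type": "string"},
            "sku": {"type": "string"},
            "quantity": {"type": "integer", "minimum": 1},
            "currency": {"type": "string", "enum": ["USD", "EUR", "GBP"]},
        },
    }

    outbound_record = {"order_id": "A-1001", "sku": "SHOE-42", "quantity": 2, "currency": "USD"}

    try:
        validate(instance=outbound_record, schema=ORDER_CONTRACT)
        print("Record conforms to the shared contract; safe to publish.")
    except ValidationError as err:
        print(f"Rejecting record before it leaves our boundary: {err.message}")

The design choice is the same one EDI made decades ago: the contract, not the transport, is what makes the exchange smooth.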
It depends on the use case. But do multiple clouds help solve that problem? Because, you know, even AWS, an $80 billion company, they're huge, but they're not everywhere. You know, they're doing Local Zones, they're doing Outposts, which is, you know, less functional than their full cloud. So maybe I would choose to go to another cloud. And if I could have that common experience, that's an advantage, isn't it? >> 100%, absolutely. And potentially there are maybe pricing tiers, right? So we're talking about latency. And again, it depends on your situation. You know, if you have some sort of medical equipment that is very latency sensitive, you want to make sure that data lives there. But versus, you know, I browse a website. If the website takes a second versus two seconds to load, do I care? Not exactly. Like, I don't notice that. So we can reshuffle that in a smart way. And I keep thinking of Waze. If we have Waze for data, where it kind of says, oh, you are stuck in traffic, go this way. You know, reshuffle you through that data center. You know, maybe your data will live there. So I think it's totally possible. I know, it's a little crazy. >> No, I like it, though. But remember when you first found Waze, you're like, "Oh, this is awesome." And then now it's like- >> And it's like crowdsourcing, right? Like, it's smart. Like, okay, maybe, you know, going to pick on US East for Amazon for a little bit, their oldest, but also busiest data center that, you know, periodically goes down. >> But then you lose your competitive advantage 'cause now it's like traffic socialism. >> Yeah, I know. >> Right? It happened the other day where everybody's going the same way. There are all the Wazers taking it. >> And also again, compliance, right? Every country is going down the path where, you know, data needs to reside within that country. So it's not as socialist or democratic as we wish for it to be. >> Well, that's a great point. I mean, when you just think about the clouds, the limitation, now you go out to the edge. I mean, everybody talks about the edge and IoT. Do you actually think that there's a whole new stovepipe that's going to get created? And does that concern you, or do you think it actually is going to be, you know, connective tissue with all these clouds? >> I honestly don't know. I live in a practical world of, how does it help me right now? How does it, you know, help me in the next five years? And mind you, in five years, things can change a lot. Because if you think back five years ago, things weren't as they are right now. I mean, I really hope that somebody out there challenges things 'cause, you know, the whole cloud promise was crazy. It was insane. Like, who came up with it? Why would I do that, right? And now I can't imagine the world without it. >> Yeah, I mean a lot of it is same wine, new bottle. You know, but a lot of it is different, right? I mean, technology keeps moving us forward, doesn't it? >> Absolutely. >> Veronika, it was great to have you. Thank you so much for your perspectives. If there was one thing that the industry could do for your data life that would make your world better, what would it be? >> I think standards for data sharing, a data marketplace. I would love, love, love nothing more than to have some agreed-upon standards. >> I had one other question for you, actually. I forgot to ask you this. 'Cause you were saying every company's a data company. Every company's a software company.
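A minimal sketch of that "Waze for data" routing idea, assuming you can measure latency to each candidate region and that residency rules can be expressed as an allow-list of countries. The endpoints and the policy below are hypothetical placeholders.

    # Sketch of the "Waze for data" idea: measure latency to candidate regions and
    # route to the fastest one that still satisfies data-residency rules.
    # Endpoints and the residency policy are hypothetical placeholders.
    import time
    import urllib.request

    CANDIDATES = [
        {"region": "us-east",    "country": "US", "url": "https://us-east.example.com/ping"},
        {"region": "eu-central", "country": "DE", "url": "https://eu-central.example.com/ping"},
    ]

    ALLOWED_COUNTRIES = {"US"}  # e.g., this workload's data must stay in the US

    def measure_latency_ms(url: str, timeout: float = 2.0) -> float:
        """Time a small round trip to the region's endpoint, in milliseconds."""
        start = time.perf_counter()
        urllib.request.urlopen(url, timeout=timeout).read()
        return (time.perf_counter() - start) * 1000.0

    def choose_region():
        """Pick the lowest-latency region among those the data is allowed to live in."""
        eligible = [c for c in CANDIDATES if c["country"] in ALLOWED_COUNTRIES]
        scored = []
        for candidate in eligible:
            try:
                scored.append((measure_latency_ms(candidate["url"]), candidate))
            except OSError:
                continue  # unreachable region: skip it, like rerouting around traffic
        if not scored:
            raise RuntimeError("No eligible region is reachable")
        return min(scored, key=lambda pair: pair[0])[1]

    if __name__ == "__main__":
        print("Selected region:", choose_region()["region"])

The compliance point she raises is why the residency filter runs before the latency race: the fastest region only counts if the data is allowed to live there.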
We're already seeing it, but how prevalent do you think it will be that companies, you've seen some of it in financial services, but companies begin to now take their own data, their own tooling, their own software, which they've developed internally, and point that to the outside world? Kind of do what AWS did. You know, working backwards from the customer and saying, "Hey, we did this for ourselves. We can now do this for the rest of the world." Do you see that as a real trend, or is that Dave's pie in the sky? >> I think it's a real trend. Every company's trying to reinvent themselves and come up with new products. And every company is a data company. Every company collects data, and they're trying to figure out what to do with it. And again, it's not necessarily to sell it. Like, you don't have to sell data to monetize it. You can use it with your partners. You can exchange data. You know, you can create products. Capital One I think created a product for Snowflake pricing. I don't recall, but it just, you know, they built it for themselves, and they decided to kind of like, monetize on it. And I'm absolutely 100% on board with that. I think it's an amazing idea. >> Yeah, Goldman is another example. Nasdaq is basically taking their exchange stack and selling it around the world. And the cloud is available to do that. You don't have to build your own data center. >> Absolutely. Or for good, right? Like, we're talking about, again, we live in a capitalist country, but use data for good. We're collecting data. We're, you know, analyzing it, we're aggregating it. How can we use it for greater good for the planet? >> Veronika, thanks so much for coming to our Marlborough studios. Always a pleasure talking to you. >> Thank you so much for having me. >> You're really welcome. All right, stay tuned for more great content. From Supercloud 2, this is Dave Vellante. We'll be right back. (upbeat music)
Dell Technologies | The Future of Multicloud Data Protection is Here 11-14
>> Prior to the pandemic, organizations were largely optimized for efficiency as the best path to bottom line profits. Many CIOs tell the Cube privately that they were caught off guard by the degree to which their businesses required greater resiliency beyond their somewhat cumbersome disaster recovery processes. And the lack of that business resilience has actually cost firms, because they were unable to respond to changing market forces. And certainly we've seen this dynamic with supply chain challenges, and there's little doubt we're also seeing it in the area of cybersecurity generally, and data recovery specifically. Over the past 30-plus months, the rapid adoption of cloud to support remote workers and build in business resilience had the unintended consequence of expanding attack vectors, which brought an escalation of risk from cybercrime. While security in the public clouds is certainly world class, the result of multi-cloud has brought with it multiple shared responsibility models, multiple ways of implementing security policies across clouds and on-prem, and at the end of the day, more, not less complexity. But there's a positive side to this story. The good news is that public policy, industry collaboration and technology innovation are moving fast to accelerate data protection and cybersecurity strategies, with a focus on modernizing infrastructure, securing the digital supply chain, and very importantly, simplifying the integration of data protection and cybersecurity. Today there's heightened awareness that the world of data protection is not only an adjacency to, but is becoming a fundamental component of, cybersecurity strategies. In particular, in order to build more resilience into a business, data protection people, technologies, and processes must be more tightly coordinated with security operations. Hello and welcome to the Future of Multi-Cloud Data Protection, made possible by Dell in collaboration with the Cube. My name is Dave Vellante and I'll be your host today. In this segment, we welcome into the Cube two senior executives from Dell who will share details on new technology announcements that directly address these challenges. >> Jeff Boudreau is the president and general manager of Dell's Infrastructure Solutions Group, ISG, and he's gonna share his perspectives on the market and the challenges he's hearing from customers. And we're gonna ask Jeff to double click on the messages that Dell is putting into the marketplace and give us his detailed point of view on what it means for customers. Now, Jeff is gonna be joined by Travis Vigil. Travis is the Senior Vice President of product management for ISG at Dell Technologies, and he's gonna give us details on the products that are being announced today and go into the hard news. Now, we're also gonna challenge our guests to explain why Dell's approach is unique and different in the marketplace. Thanks for being with us. Let's get right into it. We're here with Jeff Boudreau and Travis Vigil. We're gonna dig into the details about Dell's big data protection announcement. Guys, good to see you. Thanks for coming in. >> Good to see you. Thank you for having us. >> You're very welcome. Right. Let's start off, Jeff, with the high level. You know, I'd like to talk about the customer, what challenges they're facing. You're talking to customers all the time. What are they telling you? >> Sure.
As you know, we spend a lot of time with our customers, specifically listening, learning, understanding their use cases, their pain points within their specific environments. They tell us a lot. No surprise to any of us, data is a key theme that they talk about. It's one of their most important assets. They need to extract more value from that data to fuel their business models, their innovation engines, their competitive edge. So they need to make sure that that data is accessible, it's secure and it's recoverable, especially in today's world with the increased cyber attacks. >> Okay. So maybe we could get into some of those challenges. I mean, when you talk about things like data sprawl, what do you mean by that? What should people know? >> Sure. So for those big three themes, I'd say, you know, you have data sprawl, which is the big one, which is all about the massive amounts of data. It's the growth of that data, which is growing at an unprecedented rate. It's the gravity of that data and the reality of the multi-cloud sprawl. So stuff is just everywhere, right? Which increases the attack surface for cybercriminals. >> And by gravity you mean the data's there and people don't wanna move it. >> It's everywhere, right? And so when it lands someplace, think edge, core or cloud, it's there, and it's something we have to help our customers with. >> Okay, so it's nuanced, 'cause complexity has other layers. What are those layers? >> Sure. When we talk to our customers, they tell us complexity is one of their big themes. And specifically it's around data complexity. We talked about that growth and gravity of the data. We talk about multi-cloud complexity and we talk about multi-cloud sprawl. So multiple vendors, multiple contracts, multiple tool chains, and none of those work together in this, you know, multi-cloud world. Then that drives their security complexity. So we talk about that increased attack surface, but this really drives a lot of operational complexity for their teams. Think about it: there's a lack of consistency through everything. So people, process, tools, all that stuff, which is really wasting time and money for our customers. >> So how does that affect their cyber strategies? I mean, I've often said the CISO, now they have this shared responsibility model, they have to do that across multiple clouds. Every cloud has its own security policies and frameworks and syntax. So maybe you could double click on your perspective on that. >> Sure. I'd say the big challenge customers have seen is really inadequate cyber resiliency. And specifically they're feeling very exposed. And today, with cyber attacks being more and more sophisticated, if something goes wrong, it is a real challenge for them to get back up and running quickly. And that's why this is such a big topic for CEOs and businesses around the world. >> You know, it's funny, I said this in my open: I think that prior to the pandemic, businesses were optimized for efficiency, and now they're like, wow, we have to actually put some headroom into the system to be more resilient. Are you hearing that? >> Yeah, we absolutely are. I mean, the customers really, they're asking us for help, right? It's one of the big things we're learning and hearing from them. And it's really about three things: one is about simplifying IT, two is really helping them to extract more value from their data.
And then the third big piece is ensuring their data is protected and recoverable regardless of where it is, going back to that data gravity and that, you know, multi-cloud world. Just recently, I don't know if you've seen it, but the Global Data Protect, excuse me, the Global Data Protection Index, GDPI... >> Yes. Jesus. Not to be confused with GDPR. >> Actually, that was released today and confirms everything we just talked about around customer challenges, but it also highlights the importance of having a very robust, cyber-resilient data protection strategy. >> Yeah, I haven't seen the latest, but I want to dig into it. You've done this many, many years in a row. I like to look at the time series and see how things have changed. All right. At a high level, Jeff, can you kind of address why Dell, from your point of view, is best suited? >> Sure. So we believe there's a better way, or a better approach, on how to handle this. We think Dell is uniquely positioned to help our customers, as a one stop shop, if you will, for their cyber-resilient multi-cloud data protection needs. We take a modern, simple and resilient approach. >> What does that mean? What do you mean by modern? >> Sure. So modern, we talk about our software defined architecture, right? It's really designed to meet the needs not only of today, but really into the future. And we protect data across any cloud and any workload. So we have a proven track record doing this today. We have more than 1700 customers that trust us to protect more than 14 exabytes of their data in the cloud today. >> Okay, so you said modern, simple and resilient. What do you mean by simple? >> Sure. We wanna provide simplicity everywhere, going back to helping with the complexity challenge, and that's from deployment to consumption to management and support. So our offers will deploy in minutes. They are easy to operate and use, and we support flexible consumption models for whatever the customer may desire. So traditional subscription or as a service. >> And when you talk about resilient, I mean, I put forth that premise, but it's hard, because people say, well, that's gonna cost us more. Well, it may, but you're gonna also reduce your risk. So what's your point of view on resilience? >> Yeah, I think it's something all customers need. So we're gonna be providing a comprehensive and resilient portfolio of cyber solutions that are secured by design. We have some very unique capabilities and a combination of things like built-in immutability, physical and logical isolation. We have intelligence built in with AI-powered recovery. And just one fun fact for everybody: our cyber vault is the only solution in the industry that is endorsed by Sheltered Harbor, that meets all the needs of the financial sector. >> So it's interesting when you think about the NIST framework for cybersecurity, it's all about layers. You're sort of bringing that now to data protection, correct? >> Yeah. >> All right. In a minute we're gonna come back with Travis and dig into the news. We're gonna take a short break. Keep it right there. Okay. We're back with Jeff and Travis Vigil to dig deeper into the news. Guys, again, good to see you. Travis, if you could, maybe before we get into the news, can you set the business context for us? What's going on out there? >> Yeah, thanks for that question, Dave.
To set a little bit of the context, when you look at the data protection market, Dell has been a leader in providing solutions to customers for going on nearly two decades now. We have tens of thousands of people using our appliances. We have multiple thousands of people using our latest modern, simple PowerProtect Data Manager software. And as Jeff mentioned, we have, you know, 1700 customers protecting 14 exabytes of data in the public clouds today. And that foundation gives us a unique vantage point. We talk to a lot of customers, and they're really telling us three things. They want simple solutions, they want us to help them modernize, and they want us, as the highest priority, to maintain that high degree of resiliency that they expect from our data protection solutions. So that's the backdrop to the news today. And as we go through the news, I think you'll agree that each of these announcements delivers on those pillars. And in particular, today we're announcing the PowerProtect Data Manager appliance, we are announcing PowerProtect Cyber Recovery enhancements, and we are announcing enhancements to our Apex Data Storage Services. >> Okay, so three pieces. Let's dig into that. It's interesting: appliance. Everybody wants software, but then you talk to customers and they're like, well, we actually want appliances, because we just wanna put it in and it works, right? It performs great. So what do we need to know about the appliance? What's the news there? >> Well, you know, part of the reason I gave you some of those stats to begin with is that we have this strong foundation of experience, but also intellectual property components that we've taken, that have been battle tested in the market, and we've put them together in a new simple integrated appliance that really combines the best of the target appliance capabilities we have with that modern, simple software. And we've integrated it, you know, sort of taking all of those pieces and putting them together in a simple, easy to use and easy to scale interface for customers. >> So the premise that I've been putting forth for, you know, months now, probably well over a year, is that data protection is becoming an extension of your cybersecurity strategies. So I'm interested in your perspective on cyber recovery. You have specific news there? >> Yeah, you know, in addition to simplifying things via the appliance, we are providing solutions for customers no matter where they're deploying. And cyber recovery, especially when it comes to cloud deployments, is an increasing area of interest and deployment that we see with our customers. So what we're announcing today is that we're expanding our cyber recovery services to be available in Google Cloud. With this announcement, it means we're available in all three of the major clouds, and it really provides customers the flexibility to secure their data no matter if they're running, you know, on premises, in a colo, at the edge, or in the public cloud. And the other nice thing about this announcement is that you have the ability to use Google Cloud as a cyber recovery vault. That really allows customers to isolate critical data, and they can recover that critical data from the vault back to on premises, or from that vault back to running their cyber protection or their data protection solutions in the public cloud. >> I always invoke my favorite Matt Baker line here.
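For readers who want to picture what a cloud vault pattern looks like in practice, here is a conceptual sketch using the Google Cloud Storage client library. It is not Dell's implementation, only an illustration of the isolation-plus-immutability idea Travis describes; the project, bucket, and object names are hypothetical, and it assumes credentials that can reach both projects.

    # Conceptual sketch only: this is not Dell's cyber recovery implementation,
    # just an illustration of the "isolated vault" idea using Google Cloud Storage.
    # Project, bucket, and object names are hypothetical placeholders.
    # Requires: pip install google-cloud-storage
    from google.cloud import storage

    # The vault lives in a separate project (ideally under separate credentials),
    # so a compromised production account cannot reach into it.
    prod_client = storage.Client(project="prod-project")          # placeholder
    vault_client = storage.Client(project="isolated-vault-proj")  # placeholder

    prod_bucket = prod_client.get_bucket("prod-backups")           # placeholder
    vault_bucket = vault_client.get_bucket("cyber-vault")          # placeholder

    # Add immutability: objects cannot be deleted or overwritten for 30 days.
    # (Calling vault_bucket.lock_retention_policy() afterwards would make the
    # policy itself irreversible; that step is omitted in this sketch.)
    vault_bucket.retention_period = 30 * 24 * 60 * 60
    vault_bucket.patch()

    # Copy the latest backup image into the vault during a controlled window.
    backup_blob = prod_bucket.blob("backups/2023-01-15/full.img")  # placeholder
    prod_bucket.copy_blob(backup_blob, vault_bucket,
                          new_name="vaulted/2023-01-15-full.img")
    print("Backup copy vaulted with a 30-day retention policy.")

The two properties doing the work are separation, with the vault under a different project and different credentials than production, and a retention policy, so that what has been vaulted cannot simply be deleted or overwritten.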
It's not a zero sum game, but this is a perfect example where there are opportunities for a company like Dell to partner with the public cloud providers. You've got capabilities that don't exist there. You've got the on-prem capabilities. We can talk about edge all day, but that's a different topic. Okay, so my other question, Travis, is how does this all fit into Apex? We hear a lot about Apex as a service; it's sort of the new hot thing. What's happening there? What's the news around Apex? >> Yeah, we've seen incredible momentum with our Apex solutions since we introduced data protection options into them earlier this year. And we're really building on that momentum with this announcement, you know, providing solutions that allow customers to consume flexibly. And so what we're announcing specifically is that we're expanding Apex Data Storage Services to include a data protection option. And as with all Apex offers, it's a pay-as-you-go solution that really streamlines the process of customers purchasing, deploying, maintaining and managing their backup software. All a customer really needs to do is, you know, specify their base capacity, specify their performance tier, and tell us whether they want a one-year term or a three-year term, and we take it from there. We get them up and running so they can start deploying and consuming flexibly. And as with many of our Apex solutions, it's a simple user experience, all exposed through a unified Apex console. >> Okay. So you're keeping it simple. Like, I think large, medium, small, you know, we hear a lot about t-shirt sizes. I'm a big fan of that, 'cause you guys should be smart enough to figure out, you know, based on my workload, what I need. How different is this? I wonder if you guys could address this. Jeff, maybe you can start. >> Sure, I'll start, and then, you know, Travis, you jump in when I screw up here. So first I'd say we offer innovative multi-cloud data protection solutions. We provide solutions that deliver the performance, efficiency and scale that our customers demand and require. We support, as Travis said, all the major public clouds. We have a broad ecosystem of workload support, and I guess the great news is we're up to 80% more cost effective than any of the competition. >> 80%? That's a big number, right? Travis, what's your point of view on this? >> Yeah, I think number one, end-to-end data protection. We are that one stop shop that I talked about. Whether it's a simplified appliance, whether it's deployed in the cloud, whether it's at the edge, whether it's integrated appliances, target appliances, software, or as a service, we have solutions that span the gamut. I mentioned the Apex solution as well. So really we can provide solutions that help support customers and protect them, any workload, any cloud, anywhere that data lives, edge to core to cloud. The other thing that we hear as a big differentiator for Dell, and Jeff touched on this a little bit earlier, is our intelligent cyber resiliency. We have a unique combination in the market where we can offer immutability, or protection against deletion, as sort of that first line of defense. But we can also offer a second level of defense, which is isolation, talking about data vaults or cyber vaults and cyber recovery. And more importantly, the intelligence that goes around that vault.
It can look at detecting cyber attacks, it can help customers speed time to recovery, and it really provides AI and ML to help early diagnosis of a cyber attack and fast recovery should a cyber attack occur. And, you know, if you look at customer adoption of that solution specifically in the clouds, we have over 1300 customers utilizing PowerProtect Cyber Recovery. >> So I think it's fair to say that your portfolio has obviously been a big differentiator. Whenever I talk to, you know, your finance team, Michael Dell, et cetera, it's that end-to-end capability, that ability to manage throughout the supply chain. We actually just did an event recently with you guys where you went into what you're doing to make infrastructure trusted. And so my take on that is, in a lot of respects, you're shifting, you know, the client's burden to your R&D. Now they have a lot of work to do, so it's not like they can go home and just relax, but that's a key part of the partnership that I see. Jeff, I wonder if you could give us the final thoughts. >> Sure. Dell has a long history of being a trusted partner in IT, right? So we have unmatched capabilities. Going back to your point, we have the broadest portfolio; we're a leader in every category that we participate in. We have a broad, deep portfolio. We have scale, we have innovation that is just unmatched within data protection itself. We are the trusted market leader, no ifs, ands or buts. We're number one for both data protection software and appliances per IDC, and we were just named, for the 17th consecutive time, the leader in the Gartner Magic Quadrant. So bottom line is customers can count on Dell. >> Yeah, and I think again, we're seeing the evolution of data protection. It's not like the last 10 years; it's really becoming an adjacency and really a key component of your cyber strategy. I think those two parts of the organization are coming together. So guys, really appreciate your time. Thanks. >> Thank you, sir. >> Thanks, Travis. Good to see you. All right, in a moment I'm gonna come right back and summarize what we learned today and what actions you can take for your business. You're watching the Future of Multi-Cloud Data Protection, made possible by Dell in collaboration with the Cube, your leader in enterprise and emerging tech coverage. We'll be right back. >> In our data driven world, protecting data has never been more critical, to guard against everything from cyber incidents to unplanned outages. You need a cyber resilient, multi-cloud data protection strategy. >> It's not a matter of if you're gonna get hacked, it's a matter of when. And I wanna know that I can recover and continue to recover each day. >> It is important to have a cyber security and a cyber resiliency plan in place, because the threats of cyber attack are imminent. >> PowerProtect Data Manager from Dell Technologies helps deliver the data protection and security confidence you would expect from a trusted partner and market leader. >> We chose PowerProtect Data Manager because we've been a strategic partner with Dell Technologies for roughly 20 years now. Our partnership with Dell Technologies has provided us with the ability to scale and grow as we've transitioned from 10 billion in assets to 20 billion. >> With PowerProtect Data Manager, you can enjoy exceptional ease of use to increase your efficiency and reduce costs.
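Travis mentions AI and ML watching the vault for early signs of an attack. As a toy illustration of one such signal, not Dell's actual analytics, here is a sketch that flags an abnormal spike in daily changed-data volume, a pattern often associated with ransomware encrypting data in bulk. The history, threshold, and units are invented for the example.

    # Toy illustration of one signal a cyber-recovery analytics layer might watch:
    # a sudden spike in daily changed-data volume, a common ransomware tell.
    # This is not Dell's algorithm; the history, threshold, and units are invented.
    from statistics import mean, stdev

    history_gb = [120, 118, 131, 125, 122, 119, 127]  # recent daily changed data (GB)
    today_gb = 540                                    # today's changed-data volume (GB)

    baseline = mean(history_gb)
    spread = stdev(history_gb)
    z_score = (today_gb - baseline) / spread if spread else float("inf")

    if z_score > 3.0:  # arbitrary illustrative threshold
        print(f"ALERT: changed data {today_gb} GB is {z_score:.1f} sigma above the "
              f"{baseline:.0f} GB baseline. Investigate before trusting this backup.")
    else:
        print("Change rate looks normal; backup can be promoted to the vault.")

Real systems combine many such signals (entropy of the data, file type churn, deletion rates), but the design idea is the same: analyze the backup stream itself so an attack is caught before the last good copy ages out.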
>> I installed it by myself, learned it by myself. It's very intuitive. >> Restoring a machine with PowerProtect Data Manager is fast. We can fully manage PowerProtect through the center. We can recover a whole machine in seconds. >> Data Manager offers innovations such as transparent snapshots to simplify virtual machine backups, and it goes beyond backup and restore to provide valuable insights into protected data, workloads and VMs. >> In our previous environment, it would take anywhere from three to six hours at night to do a single backup of each VM. Now we're backing up hourly, and it takes two to three seconds with the transparent snapshots. >> With PowerProtect Data Manager, you get the peace of mind of knowing that your data is safe and available whenever you need it. >> Data is extremely important. We can't afford to lose any data. We need things to just work. >> Start your journey to modern data protection with Dell PowerProtect Data Manager. Visit dell.com/PowerProtect Data Manager. >> We put forth the premise in our introduction that the worlds of data protection and cybersecurity must be more integrated. We said that data recovery strategies have to be built into security practices and procedures, and by default this should include modern hardware and software. Now, in addition to reviewing some of the challenges that customers face, which have been pretty well documented, we heard about new products that Dell Technologies is bringing to the marketplace that specifically address these customer concerns. There were three that we talked about today. First, the PowerProtect Data Manager appliance, which is an integrated system taking advantage of Dell's history in data protection but adding new capabilities. And I want to come back to that in a moment. Second is Dell's PowerProtect Cyber Recovery for Google Cloud Platform. This rounds out the big three public cloud providers for Dell, which joins AWS and Azure support. Now finally, Dell has made its target backup appliances available in Apex. You might recall earlier this year we saw the introduction from Dell of Apex Backup Services, and then in May at Dell Technologies World we heard about the introduction of Apex Cyber Recovery Services. And today Dell is making its most popular backup appliances available in Apex. Now I wanna come back to the PowerProtect Data Manager appliance, because it's a new integrated appliance. And I asked Dell off camera what really is so special about these new systems and what's really different from the competition, because, look, everyone offers some kind of integrated appliance. So I heard a number of items. Dell talked about simplicity and efficiency and containers and Kubernetes. So I kind of kept pushing and got to what I think is the heart of the matter in two really important areas. One is simplicity.
Now this is based on Dell benchmarks, so hopefully these are things that you can explore in more detail with Dell to see if and how they apply to your business. So if you want more information, go to the data protectionPage@dell.com. You can find that at dell.com/data protection. And all the content here and other videos are available on demand@thecube.net. Check out our series on the blueprint for trusted infrastructure, it's related and has some additional information. And go to silicon angle.com for all the news and analysis related to these and other announcements. This is Dave Valante. Thanks for watching the future of multi-cloud protection made possible by Dell in collaboration with the Cube, your leader in enterprise and emerging tech coverage.
It's not a zero sum game, but this is a perfect example where there's opportunities for a company like Dell to partner with the public cloud provider. You've got capabilities that don't exist there. You've got the on-prem capabilities. We could talk about edge all day, but that's a different topic. Okay, so Mike, my other question Travis, is how does this all fit into Apex? We hear a lot about Apex as a service, it's sort of the new hot thing. What's happening there? What's the news around Apex? >>Yeah, we, we've seen incredible momentum with our Apex solutions since we introduced data protection options into them earlier this year. And we're really building on that momentum with this announcement being, you know, providing solutions that allow customers to consume flexibly. And so what we're announcing specifically is that we're expanding Apex data storage services to include a data protection option. And it's like with all Apex offers, it's a pay as you go solution really streamlines the process of customers purchasing, deploying, maintaining and managing their backup software. All a customer really needs to do is, you know, specify their base capacity, they specify their performance tier, they tell us do they want a a one year term or a three year term and we take it from there. We, we get them up and running so they can start deploying and consuming flexibly. And it's, as with many of our Apex solutions, it's a simple user experience all exposed through a unified Apex console. >>Okay. So it's you keeping it simple, like I think large, medium, small, you know, we hear a lot about t-shirt sizes. I I'm a big fan of that cuz you guys should be smart enough to figure out, you know, based on my workload, what I, what I need, how different is this? I wonder if you guys could, could, could address this. Jeff, maybe you can, >>You can start. Sure. I'll start and then pitch me, you know, Travis, you you jump in when I screw up here. So, awesome. So first I'd say we offer innovative multi-cloud data protection solutions. We provide that deliver performance, efficiency and scale that our customers demand and require. We support as Travis at all the major public clouds. We have a broad ecosystem of workload support and I guess the, the great news is we're up to 80% more cost effective than any of the competition. >>80%. 80%, That's a big number, right. Travis, what's your point of view on this? Yeah, >>I, I think number one, end to end data protection. We, we are that one stop shop that I talked about. Whether it's a simplified appliance, whether it's deployed in the cloud, whether it's at the edge, whether it's integrated appliances, target appliances, software, we have solutions that span the gamut as a service. I mentioned the Apex solution as well. So really we can, we can provide solutions that help support customers and protect them, any workload, any cloud, anywhere that data lives edge core to cloud. The other thing that we hear as a, as a, a big differentiator for Dell and, and Jeff touched on on this a little bit earlier, is our intelligent cyber resiliency. We have a unique combination in, in the market where we can offer immutability or protection against deletion as, as sort of that first line of defense. But we can also offer a second level of defense, which is isolation, talking, talking about data vaults or cyber vaults and cyber recovery. And the, at more importantly, the intelligence that goes around that vault. 
It can look at detecting cyber attacks, it can help customers speed time to recovery and really provides AI and ML to help early diagnosis of a cyber re attack and fast recovery should a cyber attack occur. And, and you know, if you look at customer adoption of that solution specifically in the clouds, we have over 1300 customers utilizing power protect cyber recovery. >>So I think it's fair to say that your, I mean your portfolio has obvious been a big differentiator whenever I talk to, you know, your finance team, Michael Dell, et cetera, that end to end capability that that, that your ability to manage throughout the supply chain. We actually just did a a, an event recently with you guys where you went into what you're doing to make infrastructure trusted. And so my take on that is you, in a lot of respects, you're shifting, you know, the client's burden to your r and d now they have a lot of work to do, so it's, it's not like they can go home and just relax, but, but that's a key part of the partnership that I see. Jeff, I wonder if you could give us the, the, the final thoughts. >>Sure. Dell has a long history of being a trusted partner with it, right? So we have unmatched capabilities. Going back to your point, we have the broadest portfolio, we have, you know, we're a leader in every category that we participate in. We have a broad deep breadth of portfolio. We have scale, we have innovation that is just unmatched within data protection itself. We are the trusted market leader, no if and or bots, we're number one for both data protection software in appliances per idc. And we would just name for the 17th consecutive time the leader in the, the Gartner Magic Quadrant. So bottom line is customers can count on Dell. >>Yeah, and I think again, we're seeing the evolution of, of data protection. It's not like the last 10 years, it's really becoming an adjacency and really a key component of your cyber strategy. I think those two parts of the organization are coming together. So guys, really appreciate your time. Thanks for Thank you sir. Thanks Dave. Travis, good to see you. All right, in a moment I'm gonna come right back and summarize what we learned today, what actions you can take for your business. You're watching the future of multi-cloud data protection made possible by Dell and collaboration with the cube, your leader in enterprise and emerging tech coverage right back >>In our data driven world. Protecting data has never been more critical to guard against everything from cyber incidents to unplanned outages. You need a cyber resilient, multi-cloud data protection strategy. >>It's not a matter of if you're gonna get hacked, it's a matter of when. And I wanna know that I can recover and continue to recover each day. >>It is important to have a cyber security and a cyber resiliency plan in place because the threat of cyber attack are imminent. >>Power protects. Data manager from Dell Technologies helps deliver the data protection and security confidence you would expect from a trusted partner and market leader. >>We chose Power Protect Data Manager because we've been a strategic partner with Dell Technologies for roughly 20 years now. Our partnership with Dell Technologists has provided us with the ability to scale and grow as we've transitioned from 10 billion in assets to 20 billion. >>With Power Protect Data Manager, you can enjoy exceptional ease of use to increase your efficiency and reduce costs. 
>>Got installed it by myself, learned it by myself with very intuitive >>While restoring a machine with Power Protect Data Manager is fast. We can fully manage Power Protect through the center. We can recover a whole machine in seconds. >>Data Manager offers innovation such as Transparent snapshots to simplify virtual machine backups and it goes beyond backup and restore to provide valuable insights and to protected data workloads and VMs. >>In our previous environment, it would take anywhere from three to six hours at night to do a single backup of each vm. Now we're backing up hourly and it takes two to three seconds with the transparent snapshots. >>With Power Protects Data Manager, you get the peace of mind knowing that your data is safe and available whenever you need it. >>Data is extremely important. We can't afford to lose any data. We need things just to work. >>Start your journey to modern data protection with Dell Power Protect Data manager. Visit dell.com/power Protect Data Manager. >>We put forth the premise in our introduction that the world's of data protection in cybersecurity must be more integrated. We said that data recovery strategies have to be built into security practices and procedures and by default this should include modern hardware and software. Now in addition to reviewing some of the challenges that customers face, which have been pretty well documented, we heard about new products that Dell Technologies is bringing to the marketplace that specifically address these customer concerns. There were three that we talked about today. First, the Power Protect Data Manager Appliance, which is an integrated system taking advantage of Dell's history in data protection, but adding new capabilities. And I want to come back to that in the moment. Second is Dell's Power Protect cyber recovery for Google Cloud platform. This rounds out the big three public cloud providers for Dell, which joins AWS and and Azure support. >>Now finally, Dell has made its target backup appliances available in Apex. You might recall earlier this year we saw the introduction from Dell of Apex backup services and then in May at Dell Technologies world, we heard about the introduction of Apex Cyber Recovery Services. And today Dell is making its most popular backup appliances available and Apex. Now I wanna come back to the Power Protect data manager appliance because it's a new integrated appliance. And I asked Dell off camera really what is so special about these new systems and what's really different from the competition because look, everyone offers some kind of integrated appliance. So I heard a number of items, Dell talked about simplicity and efficiency and containers and Kubernetes. So I kind of kept pushing and got to what I think is the heart of the matter in two really important areas. One is simplicity. >>Dell claims that customers can deploy the system in half the time relative to the competition. So we're talking minutes to deploy and of course that's gonna lead to much simpler management. And the second real difference I heard was backup and restore performance for VMware workloads. In particular, Dell has developed transparent snapshot capabilities to fundamentally change the way VMs are protected, which leads to faster backup and restores with less impact on virtual infrastructure. 
Dell believes this new development is unique in the market and claims that in its benchmarks the new appliance was able to back up 500 virtual machines in 47% less time compared to a leading competitor. Now this is based on Dell benchmarks, so hopefully these are things that you can explore in more detail with Dell to see if and how they apply to your business. So if you want more information, go to the data protectionPage@dell.com. You can find that at dell.com/data protection. And all the content here and other videos are available on demand@thecube.net. Check out our series on the blueprint for trusted infrastructure, it's related and has some additional information. And go to silicon angle.com for all the news and analysis related to these and other announcements. This is Dave Valante. Thanks for watching the future of multi-cloud protection made possible by Dell in collaboration with the Cube, your leader in enterprise and emerging tech coverage.
The Future of Multicloud Data Protection is Here
>> Prior to the pandemic, organizations were largely optimized for efficiency as the best path to bottom line profits. Many CIOs tell theCUBE privately that they were caught off guard by the degree to which their businesses required greater resiliency beyond their somewhat cumbersome disaster recovery processes. And the lack of that business resilience has actually cost firms because they were unable to respond to changing market forces. And certainly, we've seen this dynamic with supply chain challenges. And there's little doubt we're also seeing it in the area of cybersecurity generally, and data recovery specifically. Over the past 30 plus months, the rapid adoption of cloud to support remote workers and build in business resilience had the unintended consequences of expanding attack vectors, which brought an escalation of risk from cybercrime. While security in the public cloud is certainly world class, the result of multicloud has brought with it multiple shared responsibility models, multiple ways of implementing security policies across clouds and on-prem. And at the end of the day, more, not less, complexity. But there's a positive side to this story. The good news is that public policy, industry collaboration and technology innovation is moving fast to accelerate data protection and cybersecurity strategies with a focus on modernizing infrastructure, securing the digital supply chain, and very importantly, simplifying the integration of data protection and cybersecurity. Today, there's heightened awareness that the world of data protection is not only an adjacency to, but is becoming a fundamental component of cybersecurity strategies. In particular, in order to build more resilience into a business, data protection people, technologies and processes must be more tightly coordinated with security operations. Hello, and welcome to "The Future of Multicloud Data Protection" made possible by Dell in collaboration with theCUBE. My name is Dave Vellante and I'll be your host today. In this segment, we welcome into theCUBE two senior executives from Dell who will share details on new technology announcements that directly address these challenges. Jeff Boudreau is the President and General Manager of Dell's Infrastructure Solutions Group, ISG, and he's going to share his perspectives on the market and the challenges he's hearing from customers. And we're going to ask Jeff to double click on the messages that Dell is putting into the marketplace and give us his detailed point of view on what it means for customers. Now, Jeff is going to be joined by Travis Vigil. Travis is the Senior Vice-President of Product Management for ISG at Dell Technologies, and he's going to give us details on the products that are being announced today and go into the hard news. Now, we're also going to challenge our guests to explain why Dell's approach is unique and different in the marketplace. Thanks for being with us. Let's get right into it. (upbeat music) We're here with Jeff Boudreau and Travis Vigil, and we're going to dig into the details about Dell's big data protection announcement. Guys, good to see you. Thanks for coming in. >> Good to see you. Thank you for having us. >> You're very welcome. Alright, let's start off, Jeff, with the high level. You know, I'd like to talk about the customer, what challenges they're facing. You're talking to customers all the time. What are they telling you?
Sure, as you know, we spend a lot of time with our customers, specifically listening, learning, understanding their use cases, their pain points within their specific environments. They tell us a lot. No surprise to any of us that data is a key theme that they talk about. It's one of their most important assets. They need to extract more value from that data to fuel their business models, their innovation engines, their competitive edge. So, they need to make sure that that data is accessible, it's secure and it's recoverable, especially in today's world with the increased cyber attacks. >> Okay, so maybe we could get into some of those challenges. I mean, when you talk about things like data sprawl, what do you mean by that? What should people know? >> Sure, so for those big three themes, I'd say, you have data sprawl, which is the big one, which is all about the massive amounts of data. It's the growth of that data, which is growing at unprecedented rates. It's the gravity of that data and the reality of the multicloud sprawl. So stuff is just everywhere, right? Which increases the attack surface for cyber criminals. >> And by gravity, you mean the data's there and people don't want to move it. >> It's everywhere, right? And so when it lands someplace, think Edge, Core or Cloud, it's there. And it's something we have to help our customers with. >> Okay, so it's nuanced 'cause complexity has other layers. What are those layers? >> Sure. When we talk to our customers, they tell us complexity is one of their big themes. And specifically it's around data complexity. We talked about that growth and gravity of the data. We talk about multicloud complexity and we talk about multicloud sprawl. So multiple vendors, multiple contracts, multiple tool chains, and none of those work together in this multicloud world. Then that drives their security complexity. So, we talk about that increased attack surface. But this really drives a lot of operational complexity for their teams. Think about it, we're lacking consistency through everything. So people, process, tools, all that stuff, which is really wasting time and money for our customers. >> So, how does that affect the cyber strategies and the, I mean, I've often said the CISOs, now they have this shared responsibility model. They have to do that across multiple clouds. Every cloud has its own security policies and frameworks and syntax. So, maybe you could double click on your perspective on that. >> Sure. I'd say the big challenge customers have seen, it's really inadequate cyber resiliency and specifically, they're feeling very exposed. And today, with cyber attacks being more and more sophisticated, if something goes wrong, it is a real challenge for them to get back up and running quickly. And that's why this is such a big topic for CEOs and businesses around the world. >> You know, it's funny. I said this in my open. I think that prior to the pandemic businesses were optimized for efficiency, and now they're like, "Wow, we have to actually put some headroom into the system to be more resilient." You know, are you hearing that? >> Yeah, we absolutely are. I mean, the customers really, they're asking us for help, right? It's one of the big things we're learning and hearing from them. And it's really about three things. One's about simplifying IT. Two, it's really helping them to extract more value from their data.
And then the third big piece is ensuring their data is protected and recoverable regardless of where it is, going back to that data gravity and, you know, the multicloud world. Just recently, I don't know if you've seen it, but the Global Data Protected, excuse me, the Global Data Protection Index. >> GDPI. >> Yes. Jesus. >> Not to be confused with GDPR. >> Actually, that was released today and confirms everything we just talked about around customer challenges. But it also highlights the importance of having a very robust, cyber resilient data protection strategy. >> Yeah, I haven't seen the latest, but I want to dig into it. I think this, I've done this many, many years in a row. I'd like to look at the time series and see how things have changed. All right. At a high level, Jeff, can you kind of address why Dell, from your point of view, is best suited? >> Sure. So, we believe there's a better way or a better approach on how to handle this. We think Dell is uniquely positioned to help our customers as a one stop shop, if you will, for the cyber resilient multicloud data protection solutions they need. We take a modern, a simple and resilient approach. >> What does that mean? What do you mean by modern? >> Sure. So modern, we talk about our software defined architecture. Right? It's really designed to meet the needs not only of today, but really into the future. And we protect data across any cloud and any workload. So, we have a proven track record doing this today. We have more than 1,700 customers that trust us to protect more than 14 exabytes of their data in the cloud today. >> Okay, so you said modern, simple and resilient. What do you mean by simple? >> Sure. We want to provide simplicity everywhere, going back to helping with the complexity challenge. And that's from deployment to consumption, to management and support. So, our offers will deploy in minutes. They are easy to operate and use, and we support flexible consumption models for whatever the customer may desire. So, traditional subscription or as a service. >> And when you talk about resilient, I mean, I put forth that premise, but it's hard because people say, "Well, that's going to cost us more." Well, it may, but you're going to also reduce your risk. So, what's your point of view on resilience? >> Yeah, I think it's something all customers need. So, we're going to be providing a comprehensive and resilient portfolio of cyber solutions that are secure by design. And we have some unique capabilities in a combination of things like built in immutability, physical and logical isolation. We have intelligence built in with AI-powered recovery. And just one, I guess, fun fact for everybody: our cyber vault is the only solution in the industry that is endorsed by Sheltered Harbor that meets all the needs of the financial sector. >> So it's interesting when you think about the NIST framework for cybersecurity. It's all about layers. You're sort of bringing that now to data protection. >> Jeff: Correct. Yeah. >> All right. In a minute, we're going to come back with Travis and dig into the news. We're going to take a short break. Keep it right there. (upbeat music) (upbeat adventurous music) Okay, we're back with Jeff and Travis Vigil to dig deeper into the news. Guys, again, good to see you. Travis, if you could, maybe before we get into the news, can you set the business context for us? What's going on out there? >> Yeah. Thanks for that question, Dave.
To set a little bit of the context, when you look at the data protection market, Dell has been a leader in providing solutions to customers for going on nearly two decades now. We have tens of thousands of people using our appliances. We have multiple thousands of people using our latest modern, simple PowerProtect Data Manager Software. And as Jeff mentioned, we have, 1,700 customers protecting 14 exabytes of data in the public clouds today. And that foundation gives us a unique vantage point. We talked to a lot of customers and they're really telling us three things. They want simple solutions. They want us to help them modernize. And they want us to add as the highest priority, maintain that high degree of resiliency that they expect from our data protection solutions. So, that's the backdrop to the news today. And as we go through the news, I think you'll agree that each of these announcements deliver on those pillars. And in particular, today we're announcing the PowerProtect Data Manager Appliance. We are announcing PowerProtect Cyber Recovery Enhancements, and we are announcing enhancements to our APEX Data Storage Services. >> Okay, so three pieces. Let's dig to that. It's interesting, appliance, everybody wants software, but then you talk to customers and they're like, "Well, we actually want appliances because we just want to put it in and it works." >> Travis: (laughs) Right. >> It performs great. So, what do we need to know about the appliance? What's the news there? >> Well, you know, part of the reason I gave you some of those stats to begin with is that we have this strong foundation of experience, but also intellectual property components that we've taken that have been battle tested in the market. And we've put them together in a new simple, integrated appliance that really combines the best of the target appliance capabilities we have with that modern, simple software. And we've integrated it from the, you know, sort of taking all of those pieces, putting them together in a simple, easy to use and easy to scale interface for customers. >> So, the premise that I've been putting forth for months now, probably well over a year, is that data protection is becoming an extension of your cybersecurity strategies. So, I'm interested in your perspective on cyber recovery. Your specific news that you have there. >> Yeah, you know, we are in addition to simplifying things via the appliance, we are providing solutions for customers no matter where they're deploying. And cyber recovery, especially when it comes to cloud deployments, is an increasing area of interest and deployment that we see with our customers. So, what we're announcing today is that we're expanding our cyber recovery services to be available in Google Cloud. With this announcement, it means we're available in all three of the major clouds and it really provides customers the flexibility to secure their data no matter if they're running on-premises, in Acolo, at the Edge, in the public cloud. And the other nice thing about this announcement is that you have the ability to use Google Cloud as a cyber recovery vault that really allows customers to isolate critical data and they can recover that critical data from the vault back to on-premises or from that vault back to running their cyber protection or their data protection solutions in the public cloud. >> I always invoke my favorite Matt Baker here. 
"It's not a zero sum game", but this is a perfect example where there's opportunities for a company like Dell to partner with the public cloud provider. You've got capabilities that don't exist there. You've got the on-prem capabilities. We could talk about Edge all day, but that's a different topic. Okay, so my other question Travis, is how does this all fit into APEX? We hear a lot about APEX as a service. It's sort of the new hot thing. What's happening there? What's the news around APEX? >> Yeah, we've seen incredible momentum with our APEX solutions since we introduced data protection options into them earlier this year. And we're really building on that momentum with this announcement being providing solutions that allow customers to consume flexibly. And so, what we're announcing specifically is that we're expanding APEX Data Storage Services to include a data protection option. And it's like with all APEX offers, it's a pay-as-you-go solution. Really streamlines the process of customers purchasing, deploying, maintaining and managing their backup software. All a customer really needs to do is specify their base capacity. They specify their performance tier. They tell us do they want a one year term or a three year term and we take it from there. We get them up and running so they can start deploying and consuming flexibly. And as with many of our APEX solutions, it's a simple user experience all exposed through a unified APEX Console. >> Okay, so it's, you're keeping it simple, like I think large, medium, small. You know, we hear a lot about T-shirt sizes. I'm a big fan of that 'cause you guys should be smart enough to figure out, you know, based on my workload, what I need. How different is this? I wonder if you guys could address this. Jeff, maybe you can start. >> Sure, I'll start and then- >> Pitch me. >> You know, Travis, you jump in when I screw up here. >> Awesome. >> So, first I'd say we offer innovative multicloud data protection solutions. We provide that deliver performance, efficiency and scale that our customers demand and require. We support as Travis said, all the major public clouds. We have a broad ecosystem of workload support and I guess the great news is we're up to 80% more cost effective than any of the competition. >> Dave: 80%? >> 80% >> Hey, that's a big number. All right, Travis, what's your point of view on this? >> Yeah, I think number one, end-to-end data protection. We are that one stop shop that I talked about, whether it's a simplified appliance, whether it's deployed in the cloud, whether it's at the Edge, whether it's integrated appliances, target appliances, software. We have solutions that span the gamut as a service. I mentioned the APEX Solution as well. So really, we can provide solutions that help support customers and protect them, any workload, any cloud, anywhere that data lives. Edge, Core to Cloud. The other thing that we hear as a big differentiator for Dell, and Jeff touched on on this a little bit earlier, is our Intelligent Cyber Resiliency. We have a unique combination in the market where we can offer immutability or protection against deletion as sort of that first line of defense. But we can also offer a second level of defense, which is isolation, talking about data vaults or cyber vaults and cyber recovery. And more importantly, the intelligence that goes around that vault. It can look at detecting cyber attacks. It can help customers speed time to recovery. 
And really provides AI and ML to help early diagnosis of a cyber attack and fast recovery should a cyber attack occur. And if you look at customer adoption of that solution, specifically in the cloud, we have over 1300 customers utilizing PowerProtect Cyber Recovery. >> So, I think it's fair to say that your portfolio has obviously been a big differentiator. Whenever I talk to your finance team, Michael Dell, et cetera, that end-to-end capability, that your ability to manage throughout the supply chain. We actually just did an event recently with you guys where you went into what you're doing to make infrastructure trusted. And so my take on that is you, in a lot of respects, you're shifting the client's burden to your R&D. now they have a lot of work to do, so it's not like they can go home and just relax. But that's a key part of the partnership that I see. Jeff, I wonder if you could give us the final thoughts. >> Sure. Dell has a long history of being a trusted partner within IT, right? So, we have unmatched capabilities. Going back to your point, we have the broadest portfolio. We're a leader in every category that we participate in. We have a broad deep breadth of portfolio. We have scale. We have innovation that is just unmatched. Within data protection itself, we are the trusted market leader. No if, ands or buts. We're number one for both data protection software in appliances per IDC and we were just named for the 17th consecutive time the leader in the Gartner Magic Quadrant. So, bottom line is customers can count on Dell. >> Yeah, and I think again, we're seeing the evolution of data protection. It's not like the last 10 years. It's really becoming an adjacency and really, a key component of your cyber strategy. I think those two parts of the organization are coming together. So guys, really appreciate your time. Thanks for coming. >> Thank you, sir. >> Dave. >> Travis, good to see you. All right, in a moment I'm going to come right back and summarize what we learned today, what actions you can take for your business. You're watching "The Future of Multicloud Data Protection" made possible by Dell in collaboration with theCUBE, your leader in enterprise and emerging tech coverage. Right back. >> Advertiser: In our data-driven world, protecting data has never been more critical. To guard against everything from cyber incidents to unplanned outages, you need a cyber resilient multicloud data protection strategy. >> It's not a matter of if you're going to get hacked, it's a matter of when. And I want to know that I can recover and continue to recover each day. >> It is important to have a cyber security and a cyber resiliency plan in place because the threat of cyber attack are imminent. >> Advertiser: PowerProtect Data Manager from Dell Technologies helps deliver the data protection and security confidence you would expect from a trusted partner and market leader. >> We chose PowerProtect Data Manager because we've been a strategic partner with Dell Technologies for roughly 20 years now. Our partnership with Dell Technologies has provided us with the ability to scale and grow as we've transitioned from 10 billion in assets to 20 billion. >> Advertiser: With PowerProtect Data Manager, you can enjoy exceptional ease of use to increase your efficiency and reduce costs. >> I'd installed it by myself, learn it by myself. It was very intuitive. >> While restoring your machine with PowerProtect Data Manager is fast, we can fully manage PowerProtect through the center. 
We can recover a whole machine in seconds. >> Instructor: Data Manager offers innovation such as transparent snapshots to simplify virtual machine backups, and it goes beyond backup and restore to provide valuable insights into protected data, workloads and VMs. >> In our previous environment, it would take anywhere from three to six hours a night to do a single backup of each VM. Now, we're backing up hourly and it takes two to three seconds with the transparent snapshots. >> Advertiser: With PowerProtect's Data Manager, you get the peace of mind knowing that your data is safe and available whenever you need it. >> Data is extremely important. We can't afford to lose any data. We need things just to work. >> Advertiser: Start your journey to modern data protection with Dell PowerProtect's Data Manager. Visit dell.com/powerprotectdatamanager >> We put forth the premise in our introduction that the worlds of data protection in cybersecurity must be more integrated. We said that data recovery strategies have to be built into security practices and procedures and by default, this should include modern hardware and software. Now, in addition to reviewing some of the challenges that customers face, which have been pretty well documented, we heard about new products that Dell Technologies is bringing to the marketplace that specifically address these customer concerns. And there were three that we talked about today. First, the PowerProtect Data Manager Appliance, which is an integrated system taking advantage of Dell's history in data protection, but adding new capabilities. And I want to come back to that in a moment. Second is Dell's PowerProtect Cyber Recovery for Google Cloud platform. This rounds out the big three public cloud providers for Dell, which joins AWS and Azure support. Now finally, Dell has made its target backup appliances available in APEX. You might recall, earlier this year we saw the introduction from Dell of APEX Backup Services and then in May at Dell Technologies World, we heard about the introduction of APEX Cyber Recovery Services. And today, Dell is making its most popular backup appliances available in APEX. Now, I want to come back to the PowerProtect Data Manager Appliance because it's a new integrated appliance and I asked Dell off camera, "Really what is so special about these new systems and what's really different from the competition?" Because look, everyone offers some kind of integrated appliance. So, I heard a number of items. Dell talked about simplicity and efficiency and containers and Kubernetes. So, I kind of kept pushing and got to what I think is the heart of the matter in two really important areas. One is simplicity. Dell claims that customers can deploy the system in half the time relative to the competition. So, we're talking minutes to deploy, and of course that's going to lead to much simpler management. And the second real difference I heard was backup and restore performance for VMware workloads. In particular, Dell has developed transparent snapshot capabilities to fundamentally change the way VMs are protected, which leads to faster backup and restores with less impact on virtual infrastructure. Dell believes this new development is unique in the market and claims that in its benchmarks, the new appliance was able to back up 500 virtual machines in 47% less time compared to a leading competitor. 
Now, this is based on Dell benchmarks, so hopefully these are things that you can explore in more detail with Dell to see if and how they apply to your business. So if you want more information, go to the Data Protection Page at dell.com. You can find that at dell.com/dataprotection. And all the content here and other videos are available on demand at theCUBE.net. Check out our series on the blueprint for trusted infrastructure, it's related and has some additional information. And go to siliconangle.com for all the news and analysis related to these and other announcements. This is Dave Vellante. Thanks for watching "The Future of Multicloud Protection" made possible by Dell, in collaboration with theCUBE, your leader in enterprise and emerging tech coverage. (upbeat music)
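As a side note on the APEX consumption model Travis described above, here is a purely hypothetical sketch of the three inputs he called out: base capacity, performance tier, and a one or three year term. The names, tier values, and structure below are assumptions made for illustration only; they do not represent Dell's actual APEX API or ordering flow.

```python
# Hypothetical illustration of the three APEX Data Storage Services inputs
# described in the interview (base capacity, performance tier, term).
# Names and tier values are assumptions for illustration, not Dell's actual API.

from dataclasses import dataclass

VALID_TIERS = {"balanced", "performance"}   # assumed tier names, illustration only
VALID_TERMS_YEARS = {1, 3}                  # "a one year term or a three year term"

@dataclass
class ApexDataProtectionRequest:
    base_capacity_tb: int   # base capacity the customer specifies
    performance_tier: str   # performance tier the customer specifies
    term_years: int         # one or three year term, per the interview

    def validate(self) -> None:
        # Basic sanity checks on the three customer-specified inputs.
        if self.base_capacity_tb <= 0:
            raise ValueError("base capacity must be positive")
        if self.performance_tier not in VALID_TIERS:
            raise ValueError(f"unknown tier: {self.performance_tier}")
        if self.term_years not in VALID_TERMS_YEARS:
            raise ValueError("term must be 1 or 3 years")

# Example: a 100 TB, balanced-tier, three year subscription request.
req = ApexDataProtectionRequest(base_capacity_tb=100, performance_tier="balanced", term_years=3)
req.validate()
print(req)
```

The point of the sketch is simply that the pay-as-you-go model reduces the customer-facing decision to a small, well-defined set of inputs, which is what the interview emphasizes.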
The Future of Dell Technologies
(upbeat music) >> The transformation of Dell into Dell EMC and now Dell Technologies has been one of the most remarkable stories in the history of the enterprise technology industry. The company has gone from a Wall Street darling rocket ship PC company, to a middling enterprise player forced to go private, to a debt-laden powerhouse that controlled one of the most valuable assets in enterprise tech i.e VMware. And now is a 100 billion dollar giant with a low margin business, a strong balance sheet, and the broadest hardware portfolio in the industry. Financial magic that Dell went through would make anyone's head spin. The last lever of Dell EMC, of the Dell EMC deal was detailed in Michael Dell's book, "Play Nice But Win." In a captivating chapter called Harry You and the Bolt from the Blue, Michael Dell described how he and his colleagues came up with the final straw of how to finance the deal. If you haven't read it, you should. And, of course, after years of successfully integrating EMC and becoming VMware's number one distribution channel, all of this culminated in the spin out of VMware from Dell in a massive wealth creation milestone. Pending, of course, the Broadcom acquisition of VMware. So where's that leave Dell and what does the future look like for this technology powerhouse? Hello, and welcome to theCUBE's exclusive coverage of Dell Technology Summit 2022. My name is Dave Vellante and I'll be hosting the program. Now, today in conjunction with the Dell Tech Summit, we're going to hear from four of Dell's senior executives Tom Sweet, who's the CFO of Dell Technologies. He's going to share his views on the company's position and opportunities going forward. He's going to answer the question, why is Dell a good long-term investment? Then we'll hear from Jeff Boudreau who's the president of Dell's ISG business. That unit is the largest profit driver of Dell. He's going to talk about the product angle and specifically, how Dell is thinking about solving the multi-cloud challenge. And then Sam Grocott who is the senior vice president of marketing will come on the program and give us the update on Apex, which is Dell's as-a-service offering, and then the new edge platform called Project Frontier. Now, it's also Cyber Security Awareness month that we're going to see if Sam has anything to say about that. Then finally, for a company that's nearly 40 years old, Dell actually has some pretty forward-thinking philosophies when it comes to its culture and workforce. And we're going to speak with Jennifer Saavedra who's Dell's chief human resource officer about hybrid work and how Dell is thinking about the future of work. However, before we get into all this, I want to share our independent perspectives on the company and some research that will introduce to frame the program. Now, as you know, we love data here at theCUBE and one of our partners, ETR has what we believe is the best spending intentions data for enterprise tech. So here's a graphic that shows ETR's proprietary net score methodology in the vertical axis. That's a measure of spending velocity. And on the x-axis is overlap of pervasiveness in the data sample. This is a cut for just the server, the storage, and the client sectors within the ETR taxonomy. So you can see Dell CSG products, laptops in particular are dominant on both the X and the Y dimensions. CSG is the client solutions group and accounts for nearly 60% of Dell's revenue and about half of its operating income. 
And then the arrow signifies that dot that represents Dell's ISG business that we're going to talk to Jeff Boudreau about. That's the infrastructure solutions group. Now, ISG accounts for the bulk of the remainder of Dell's business and it is, as I said, Dell's most profitable business from a margin standpoint. It comprises the EMC storage business as well as the Dell server business and Dell's networking portfolio. And as a note, we didn't include networking in that cut. Had we done so, Cisco would've dominated the graphic. And frankly, Dell's networking business is industry-leading in the same way that PCs, servers, and storage are. And as you can see, the data confirms the leadership position Dell has in its client side, its server and its storage sectors. But the nuance is, look at that red dotted line at 40% on the vertical axis. That represents a highly elevated net score and every company in the sector is below that line. Now, we should mention that we also filtered the data for those companies with more than 100 mentions in the survey, but the point remains the same. This is a mature business that generally is lower margin. Storage is the exception, but cloud has put pressure on margins even in that business in addition to the server space. The last point on this graphic is we put a box around VMware and it's prominently present on both the X and Y dimensions. VMware participates with purely software-defined high margin offerings in these spaces, and it gives you a sense of what might have been had Dell chosen to hold onto that asset or spin it into the company. But let's face it, the alternatives from Michael Dell were just too attractive and it's unlikely that a spin in would've unlocked the value in the way a spin-out did, at least not in the near future. So let's take a look at the snapshot of Dell's financials to give you a sense of where the company stands today. Dell is a company with over 100 billion dollars in revenue. Last quarter, it did more than 26 billion in revenue and grew at a quite amazing 9% rate for a company that size. But because it's a hardware company primarily, its margins are low, with operating income at 10% of revenue and a 21% gross margin. With VMware on Dell's income statement, before the spin its gross margins were in the low 30s. Now, Dell only spends about 2% of revenue on R&D, but because it's so big, it's still a lot of money. And you can see it is cash flow positive. Dell's free cash flow over the trailing 12-month period is 3.7 billion, but that's only 3.5% of trailing 12-month revenue. Dell's Apex and of course its hardware maintenance business is recurring revenue and that is only about 5 billion in revenue and it's growing at 8% annually. Now having said that, it's the equivalent of ServiceNow's total revenue. Of course, ServiceNow has a 23% operating margin and a 16% free cash flow margin and more than $5 billion in cash on the balance sheet and an 85 billion dollar market cap. That's what software will do for you. Now, Dell, like most companies, is staring at a challenging macro environment with FX headwinds, inflation, et cetera. You've heard the story, and hence its conservative and contracting revenue guidance. But the balance sheet transformation has been quite amazing thanks to VMware's cash flow. Michael Dell and his partners from Silver Lake et al, they put up around $4 billion of their own cash to buy EMC for $67 billion and of course got VMware in the process.
Most of that financing was debt that Dell put on its balance sheet to do the transaction, to the tune of $46 billion added to the balance sheet debt. Now, Dell's debt, the core debt, net of its financing operation, is now down to 16 billion and it has 7 billion in cash on the balance sheet. So a dramatic delta from just a few years ago. So pretty good picture. But Dell, a $100 billion company, is still only valued at 28 billion, or around 26 cents on the revenue dollar. HPE's revenue multiple is around 60 cents on the revenue dollar. HP Inc, Dell's laptop and PC competitor, is around 45 cents. IBM's revenue multiple is almost two times. By the way, IBM has more than $50 billion in debt thanks to the Red Hat acquisition. And Cisco has a revenue multiple, it's over 3X, about 3.3X currently. So is Dell undervalued? Well, based on these comparisons with its peers, I'd say yes and no. Dell's performance relative to its peers in the market is very strong. It's winning and has an extremely adept go-to-market machine. But its lack of software content and its margin profile lead one to believe that if it can continue to pull some valuation levers while entering new markets, it can get its valuation well above where it is today. So what are some of those levers and what might that look like going forward? Despite the fact that Dell doesn't have a huge software revenue component since spinning out VMware, and it doesn't own a cloud, it plays in virtually every part of the hardware market. And it can provide infrastructure for pretty much any application, in any use case, in pretty much any industry, in pretty much any geography in the world, and it can serve those customers. So its size is an advantage. However, the history for hardware-heavy companies that try to get bigger has some notable failures. Namely HP, which had to split into two businesses, HP Inc and HPE, and IBM, which has had an abysmal decade from a performance standpoint and has had to shrink to grow again and obviously do a massive $34 billion acquisition of Red Hat. So why will Dell do any better than these two? Well, it has a fantastic supply chain. It's a founder-led company, which makes a cultural difference, in our view, and it's actually comfortable with a low margin, software-light business model. Most certainly, IBM wasn't comfortable with that and didn't have these characteristics, and HP was kind of just incomprehensible at the end. So Dell, in my opinion, has a much better chance of doing well at $100 billion or over, but we'll see how it navigates through the current headwinds as it's guiding down. Apex is essentially Dell's version of the cloud. Now remember, Dell got started late. HPE is further along from a model standpoint with GreenLake. But Dell has a larger portfolio, so they're going to try to play on that advantage. But at the end of the day, these as-a-service offerings are simply ways to bring a utility model to existing customers and generate recurring revenue. And that's a good thing because customers will be loyal to an incumbent if it can deliver as-a-service and reduce risk for customers. But the real opportunity lies ahead, specifically Dell is embracing the cloud model. It took a while, but they're on board. As Matt Baker, Dell's senior vice president of corporate strategy, likes to say, it's not a zero sum game. What he means by that is just because Dell doesn't own its own cloud, it doesn't mean Dell can't build value on top of hyperscale clouds, what we call super cloud.
And that's Dell's strategy, to take advantage of public cloud CapEx and connect on-prem to the cloud, create a unified experience across clouds and out to the edge. That's ambitious and technically it's non-trivial. But listen to Dell's vice chairman and co-COO Jeff Clarke explain this vision. Please play the clip. >> You said also technology and business models are tied together and an enabler. If you believe that, then you have to believe that it's a business operating system that they want. They want to leverage whatever they can, and at the end of the day, they have to differentiate what they do. >> No, that's exactly right. If I take that and what Dave was saying and I summarize it the following way: if we can take these cloud assets and capabilities, combine them in an orchestrated way to deliver a distributed platform, game over. >> Yeah, pretty interesting, right? John Furrier called it a business operating system. Essentially, I think of it sometimes as a cloud operating system or cloud operating environment to drive new business value on top of the hyperscale CapEx. Now, is it really game over, as Jeff Clarke said, if Dell can do that? I'd say if it had that today, it might be game over for the competition, but this vision will take years to play out, and of course it's got to be funded. And it's going to take time, and in this industry, it tends to move, companies tend to move in lockstep. So as is often the case, it's going to come down to execution and Dell's ability to enter new markets that are ideally, at least from my perspective, higher margin. Data management, extending data protection into cybersecurity as an adjacency and, of course, edge at telco/5G opportunities. All there for the taking. I mean, look, even if Dell doesn't go after more higher-margin software content, it can thrive with a lower margin model just by penetrating new markets and throwing off cash from those markets. But by keeping close to customers and maybe through tuck-in acquisitions, it might be able to find the next nugget beyond today's cloud and on-prem models. And the last thing I'll call out is ecosystem. I say here ecosystem, ecosystem, ecosystem. Because a defining characteristic of a cloud player is ecosystem, and if Apex is Dell's cloud, it has the opportunity to expand that ecosystem dramatically. This is one of the company's biggest opportunities and challenges at the same time, in my view. It's just scratching the surface on its partner ecosystem. And its ecosystem today is both reseller-heavy and tech partner-heavy. And that's not a bad thing, but it's starting to evolve more rapidly. The Snowflake deal is an example of up-the-stack evolution. But I'd like to see much more out of that Snowflake relationship and more relationships like that. Specifically, I'd like to see more momentum with data and database. And if we live in a data-heavy world, which we do, where the data and the database and data management offerings coexist and are super important to customers, I'd like to see that inside of Apex. I'd like to see that data play beyond storage, which is really where it is today, and it's early days. The point is, with Dell's go-to-market advantage, which company wouldn't treat Dell like the on-prem, hybrid, edge, super cloud player that I want to partner with to drive more business? You'd be crazy not to. But Dell has a lot on its plate and we'd like to see some serious acceleration on the ecosystem front.
In other words, Dell as both a selling partner and a business enabler with its platform, its programmable infrastructure as-a-service. And that is a moving target that will rapidly evolve. And, of course, we'll be here watching and reporting. So thanks for watching this preview of Dell Technology Summit 2022. I'm Dave Vellante, we hope you enjoy the rest of the program. (upbeat music)
Angelo Fausti & Caleb Maclachlan | The Future is Built on InfluxDB
>> Okay. We're now going to go into the customer panel, and we'd like to welcome Angelo Fausti, who's a software engineer at the Vera C. Rubin Observatory, and Caleb Maclachlan, who's senior spacecraft operations software engineer at Loft Orbital. Guys, thanks for joining us. You don't want to miss this interview, folks. Caleb, let's start with you. You work for an extremely cool company, you're launching satellites into space. Of course doing that is highly complex and not a cheap endeavor. Tell us about Loft Orbital and what you guys do to attack that problem. >> Yeah, absolutely. And thanks for having me here by the way. So Loft Orbital is a company that's a series B startup now, and our mission basically is to provide rapid access to space for all kinds of customers. Historically, if you want to fly something in space, do something in space, it's extremely expensive. You need to book a launch, build a bus, hire a team to operate it, have big software teams, and then eventually worry about a bunch of, just a lot of very specialized engineering. And what we're trying to do is change that from a super specialized problem that has an extremely high barrier of access to an infrastructure problem. So that getting your programs, your mission, deployed on orbit with access to different sensors, cameras, radios, stuff like that, is almost as simple as deploying a VM in AWS or GCP. So, that's kind of our mission, and just to give a really brief example of the kind of customer that we can serve: there's a really cool company called Totum Labs, who is working on building an IoT constellation for, internet of things, basically being able to get telemetry from all over the world. They're the first company to demonstrate indoor IoT, which means you have this little modem inside a container that you can track from anywhere in the world as it's going across the ocean. So, and it's really little, and they've been able to stay a small startup that's focused on their product, which is that super crazy, complicated, cool radio, while we handle the whole space segment for them, which just, you know, before Loft was really impossible. So that's our mission, providing space infrastructure as a service. We are kind of groundbreaking in this area and we're serving a huge variety of customers with all kinds of different missions, and obviously generating a ton of data in space that we've got to handle. >> Yeah. So amazing Caleb, what you guys do. Now, I know you were lured to the skies very early in your career, but how did you kind of land in this business? >> Yeah, so, I guess just a little bit about me. For some people, they don't necessarily know what they want to do like earlier in their life. For me, I was five years old and I knew I wanted to be in the space industry. So, I started in the Air Force, but have stayed in the space industry my whole career and been a part of, this is the fifth space startup that I've been a part of actually. So, I kind of started out in satellites, spent some time working in the launch industry on rockets, then, now I'm here back in satellites and honestly, this is the most exciting of the different space startups that I've been a part of. >> Super interesting. Okay. Angelo, let's talk about the Rubin Observatory. Vera C. Rubin, famous woman scientist, galaxy guru. Now you guys, the Observatory, you're up way up high, you get a good look at the Southern sky.
And I know COVID slowed you guys down a bit, but no doubt you continued to code away on the software. I know you're getting close, you got to be super excited, give us the update on the Observatory and your role. >> All right. So, yeah. Rubin is a state of the art observatory that is under construction on a remote mountain in Chile. And, with Rubin we'll conduct the Legacy Survey of Space and Time. We're going to observe the sky with an eight-meter optical telescope and take 1000 pictures every night with a 3.2 gigapixel camera. And we are going to do that for 10 years, which is the duration of the survey. >> Yeah, amazing project. Now, you earned a doctorate, so you probably spent some time thinking about what's out there, and you went on to earn a PhD in astronomy and astrophysics. So, this is something that you've been working on for the better part of your career, isn't it? >> Yeah, that's right, about 15 years. I studied physics in college. Then I got a PhD in astronomy. And, I worked for about five years in another project, the Dark Energy Survey, before joining Rubin in 2015. >> Yeah, impressive. So it seems like both your organizations are looking at space from two different angles. One thing you guys both have in common of course is software, and you both use InfluxDB as part of your data infrastructure. How did you discover InfluxDB, get into it? How do you use the platform? Maybe Caleb you could start. >> Yeah, absolutely. So, the first company that I extensively used InfluxDB in was a launch startup called Astra. And we were in the process of designing our first generation rocket there, and testing the engines, pumps, everything that goes into a rocket. And, when I joined the company our data story was not very mature. We were collecting a bunch of data in LabVIEW and engineers were taking that over to MATLAB to process it. And at first, there, you know, that's the way that a lot of engineers and scientists are used to working. And at first, people weren't entirely sure that that needed to change. But the nice thing about InfluxDB is that it's so easy to deploy. So our software engineering team was able to get it deployed and up and running very quickly, and then quickly also backport all of the data that we had collected thus far into Influx. And, what was amazing to see, and is kind of the super cool moment with Influx, is when we hooked that up to Grafana, Grafana being the visualization platform we used with Influx, 'cause it works really well with it. There was like this aha moment of our engineers who are used to this post-process kind of method for dealing with their data, where they could just almost instantly, easily discover data that they hadn't been able to see before, and take the manual processes that they would run after a test and just throw those all in Influx and have live data as tests were coming in, and, I saw them implementing like crazy rocket equation type stuff in Influx, and it just was totally game changing for how we tested. >> So Angelo, I was explaining in my open that you could add a column in a traditional RDBMS and do time series, but with the volume of data that you're talking about in the example that Caleb just gave, you have to have a purpose-built time series database. Where did you first learn about InfluxDB? >> Yeah, correct.
So, I work with the data management team, and my first project was to record metrics that measure the performance of our software, the software that we use to process the data. So I started implementing that in our relational database. But then I realized that in fact I was dealing with time series data and I should really use a solution built for that. And then I started looking at time series databases and I found InfluxDB, and that was back in 2018. Another use for InfluxDB that I'm also interested in is the visits database. If you think about the observations, we are moving the telescope all the time and pointing to specific directions in the sky and taking pictures every 30 seconds. So that itself is a time series. And every point in that time series, we call a visit. So we want to record the metadata about those visits in InfluxDB. That time series is going to be 10 years long, with about 1000 points every night. It's actually not too much data compared to other problems. It's really just a different time scale. >> The telescope at the Rubin Observatory is like, pun intended, I guess the star of the show. And I believe I read that it's going to be the first of the next gen telescopes to come online. It's got this massive field of view, like three orders of magnitude times the Hubble's widest camera view, which is amazing. Like, that's like 40 moons in an image, amazingly fast as well. What else can you tell us about the telescope? >> This telescope has to move really fast. And, it also has to carry the primary mirror, which is an eight-meter piece of glass. It's very heavy. And it has to carry a camera, which is about the size of a small car. And this whole structure weighs about 300 tons. For that to work, the telescope needs to be very compact and stiff. And one thing that's amazing about its design is that the telescope, this 300-ton structure, sits on a tiny film of oil, which has the diameter of a human hair. And that makes an almost zero-friction interface. In fact, a few people can move this enormous structure with only their hands. As you said, another aspect that makes this telescope unique is the optical design. It's a wide field telescope. So, each image has, in diameter, the size of about seven full moons. And, with that, we can map the entire sky in only three days. And of course, during operations everything's controlled by software and it is automatic. There's a very complex piece of software called the Scheduler, which is responsible for moving the telescope, and the camera, which is recording 15 terabytes of data every night. >> And Angelo, all this data lands in InfluxDB, correct? And what are you doing with all that data? >> Yeah, actually not. So we use InfluxDB to record engineering data and metadata about the observations. Like telemetry, events, and commands from the telescope. That's a much smaller data set compared to the images. But it is still challenging because you have some high frequency data that the system needs to keep up with, and we need to store this data and have it around for the lifetime of the project. >> Got it. Thank you. Okay, Caleb, let's bring you back in. Tell us more about the, you got these dishwasher-sized satellites, kind of using a multi-tenant model, I think it's genius. But tell us about the satellites themselves. >> Yeah, absolutely. So, we have in space some satellites already that, as you said, are like dishwasher, mini fridge kind of size.
And we're working on a bunch more that are a variety of sizes from shoebox to, I guess, a few times larger than what we have today. And it is, we do shoot to have effectively something like a multi-tenant model where we will buy a bus off the shelf. The bus is what you can kind of think of as the core piece of the satellite, almost like a motherboard or something where it's providing the power, it has the solar panels, it has some radios attached to it. It handles the attitude control, basically steers the spacecraft in orbit, and then we build also in-house, what we call our payload hub which is, has all, any customer payloads attached and our own kind of Edge processing sort of capabilities built into it. And, so we integrate that, we launch it, and those things because they're in lower Earth orbit, they're orbiting the earth every 90 minutes. That's, seven kilometers per second which is several times faster than a speeding bullet. So we have one of the unique challenges of operating spacecraft in lower Earth orbit is that generally you can't talk to them all the time. So, we're managing these things through very brief windows of time, where we get to talk to them through our ground sites, either in Antarctica or in the North pole region. >> Talk more about how you use InfluxDB to make sense of this data through all this tech that you're launching into space. >> We basically, previously we started off when I joined the company, storing all of that as Angelo did in a regular relational database. And we found that it was so slow and the size of our data would balloon over the course of a couple days to the point where we weren't able to even store all of the data that we were getting. So we migrated to InfluxDB to store our time series telemetry from the spacecraft. So, that's things like power level, voltage, currents, counts, whatever metadata we need to monitor about the spacecraft, we now store that in InfluxDB. And that has, now we can actually easily store the entire volume of data for the mission life so far without having to worry about the size bloating to an unmanageable amount, and we can also seamlessly query large chunks of data. Like if I need to see, you know, for example, as an operator, I might want to see how my battery state of charge is evolving over the course of the year, I can have, plot in an Influx that loads that in a fraction of a second for a year's worth of data because it does, intelligent, it can intelligently group the data by assigning time interval. So, it's been extremely powerful for us to access the data. And, as time has gone on, we've gradually migrated more and more of our operating data into Influx. >> Yeah. Let's talk a little bit about, we throw this term around a lot of, you know, data driven, a lot of companies say, "Oh yes, we're data driven." But you guys really are, I mean, you got data at the core. Caleb, what does that mean to you? >> Yeah, so, you know, I think the, and the clearest example of when I saw this be like totally game changing is what I mentioned before at Astra where our engineer's feedback loop went from a lot of kind of slow researching, digging into the data to like an instant, instantaneous almost, seeing the data, making decisions based on it immediately rather than having to wait for some processing. And that's something that I've also seen echoed in my current role. 
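As a rough sketch of the kind of query Caleb describes above (a year of battery state of charge, downsampled server-side so it plots in a fraction of a second), here is what that might look like with the open source influxdb-client package for Python against an InfluxDB 2.x API. The bucket, measurement, and field names are hypothetical, not Loft Orbital's actual schema.

```python
# Hedged sketch: query a year of battery state-of-charge, letting InfluxDB
# group the raw telemetry into daily means before it ever leaves the server.
# Bucket, measurement, and field names below are made up for illustration.
from influxdb_client import InfluxDBClient

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")

flux = '''
from(bucket: "spacecraft")
  |> range(start: -1y)
  |> filter(fn: (r) => r._measurement == "eps" and r._field == "battery_soc")
  |> aggregateWindow(every: 1d, fn: mean, createEmpty: false)
'''

for table in client.query_api().query(flux):
    for record in table.records:
        print(record.get_time(), record.get_value())
```

Changing the aggregateWindow interval is how the same underlying data can serve both the year-long zoom-out and the sub-second zoom-in that come up later in the conversation.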
But to give another practical example, as I said, we have a huge amount of data that comes down every orbit and we need to be able to ingest all of that data almost instantaneously and provide it to the operator in near real time, about a second worth of latency is all that's acceptable for us to react to see what is coming down from the spacecraft. And building that pipeline is challenging from a software engineering standpoint. My primary language is Python which isn't necessarily that fast. So what we've done is started, and the goal of being data-driven is publish metrics on individual, how individual pieces of our data processing pipeline are performing into Influx as well. And we do that in production as well as in dev. So we have kind of a production monitoring flow. And what that has done is allow us to make intelligent decisions on our software development roadmap where it makes the most sense for us to focus our development efforts in terms of improving our software efficiency, just because we have that visibility into where the real problems are. And sometimes we've found ourselves before we started doing this, kind of chasing rabbits that weren't necessarily the real root cause of issues that we were seeing. But now that we're being a bit more data driven there, we are being much more effective in where we're spending our resources and our time, which is especially critical to us as we scale from supporting a couple of satellites to supporting many, many satellites at once. >> Yeah, of course is how you reduced those dead ends. Maybe Angelo you could talk about what sort of data-driven means to you and your teams. >> I would say that, having real time visibility to the telemetry data and metrics is crucial for us. We need to make sure that the images that we collect with the telescope have good quality, and, that they are within the specifications to meet our science goals. And so if they are not, we want to know that as soon as possible and then start fixing problems. >> Caleb, what are your sort of event, you know, intervals like? >> So I would say that, as of today on the spacecraft, the event, the level of timing that we deal with probably tops out at about 20 Hertz, 20 measurements per second on things like our gyroscopes. But, the, I think the core point here of the ability to have high precision data is extremely important for these kinds of scientific applications and I'll give an example from when I worked at, on the rockets at Astra. There, our baseline data rate that we would ingest data during a test is 500 Hertz. So 500 samples per second, and in some cases we would actually need to ingest much higher rate data, even up to like 1.5 kilohertz, so extremely, extremely high precision data there where timing really matters a lot. And, you know, I can, one of the really powerful things about Influx is the fact that it can handle this. That's one of the reasons we chose it, because, there's, times when we're looking at the results of a firing where you're zooming in, you know, I talked earlier about how on my current job we often zoom out to look at a year's worth of data. 
You're zooming in to where your screen is preoccupied by a tiny fraction of a second, and you need to see, same thing as Angelo just said, not just the actual telemetry, which is coming in at a high rate, but the events that are coming out of our controllers, so that can be something like, "Hey, I opened this valve at exactly this time," and that goes, we want to have that at micro, or even nanosecond, precision so that we know, okay, we saw a spike in chamber pressure at this exact moment, was that before or after this valve opened? That kind of visibility is critical in these kinds of scientific applications, and absolutely game changing to be able to see that in near real time, and with a really easy way for engineers to be able to visualize this data themselves without having to wait for us software engineers to go build it for them. >> Can the scientists do self-serve or do you have to design and build all the analytics and queries for your scientists? >> Well, I think that's absolutely, from my perspective that's absolutely one of the best things about Influx and what I've seen be game changing is that, generally I'd say anyone can learn to use Influx. And honestly, most of our users might not even know they're using Influx, because the interface that we expose to them is Grafana, which is a generic, open source graphing library that is very similar to Influx's own Chronograf. >> Sure. >> And what it does is, it provides this almost, it's a very intuitive UI for building your queries. So, you choose a measurement and it shows a dropdown of available measurements. And then you choose the particular fields you want to look at, and again, that's a dropdown. So, it's really easy for our users to discover, and there's kind of point and click options for doing math, aggregations. You can even do like perfect kind of predictions all within Grafana, the Grafana user interface, which is really just a wrapper around the APIs and functionality that Influx provides. >> Putting data in the hands of those who have the context, the domain experts, is key. Angelo, is it the same situation for you, is it self-serve? >> Yeah, correct. As I mentioned before, we have the astronomers making their own dashboards because they know exactly what they need to visualize. >> Yeah, I mean, it's all about using the right tool for the job. I think for us, when I joined the company we weren't using InfluxDB and we were dealing with serious issues of the database growing to an incredible size extremely quickly, and even querying short periods of data was taking on the order of seconds, which is just not possible for operations. >> Guys, this has been really formative, it's pretty exciting to see how the edge is mountaintops, low Earth orbits, I mean space is the ultimate edge, isn't it? I wonder if you could answer two questions to wrap here. You know, what comes next for you guys? And is there something that you're really excited about that you're working on? Caleb maybe you could go first and then Angelo you can bring us home. >> Basically what's next for Loft Orbital is more satellites, a greater push towards infrastructure, and really delivering on our mission, which is to make space simple for our customers and for everyone. And we're scaling the company like crazy now, making that happen. It's extremely exciting, an extremely exciting time to be in this company and to be in this industry as a whole.
Because there are so many interesting applications out there, so many cool ways of leveraging space that people are taking advantage of, and with companies like SpaceX and the, now rapidly lowering cost of launch it's just a really exciting place to be in. We're launching more satellites, we are scaling up for some constellations, and our ground system has to be improved to match. So, there's a lot of improvements that we're working on to really scale up our control software to be best in class and make it capable of handling such a large workload, so. >> Are you guys hiring? >> We are absolutely hiring, so I would, we have positions all over the company, so, we need software engineers, we need people who do more aerospace specific stuff. So absolutely, I'd encourage anyone to check out the Loft Orbital website, if this is at all interesting. >> All right, Angelo, bring us home. >> Yeah. So what's next for us is really getting this telescope working and collecting data. And when that's happened is going to be just a deluge of data coming out of this camera and handling all that data is going to be really challenging. Yeah, I want to be here for that, I'm looking forward. Like for next year we have like an important milestone, which is our commissioning camera, which is a simplified version of the full camera, it's going to be on sky, and so yeah, most of the system has to be working by then. >> Nice. All right guys, with that we're going to end it. Thank you so much, really fascinating, and thanks to InfluxDB for making this possible, really groundbreaking stuff, enabling value creation at the Edge, in the cloud, and of course, beyond at the space. So, really transformational work that you guys are doing, so congratulations and really appreciate the broader community. I can't wait to see what comes next from having this entire ecosystem. Now, in a moment, I'll be back to wrap up. This is Dave Vellante, and you're watching theCUBE, the leader in high tech enterprise coverage. >> Welcome. Telegraf is a popular open source data collection agent. Telegraf collects data from hundreds of systems like IoT sensors, cloud deployments, and enterprise applications. It's used by everyone from individual developers and hobbyists, to large corporate teams. The Telegraf project has a very welcoming and active Open Source community. Learn how to get involved by visiting the Telegraf GitHub page. Whether you want to contribute code, improve documentation, participate in testing, or just show what you're doing with Telegraf. We'd love to hear what you're building. >> Thanks for watching Moving the World with InfluxDB, made possible by Influx Data. I hope you learned some things and are inspired to look deeper into where time series databases might fit into your environment. If you're dealing with large and or fast data volumes, and you want to scale cost effectively with the highest performance, and you're analyzing metrics and data over time, times series databases just might be a great fit for you. Try InfluxDB out. You can start with a free cloud account by clicking on the link in the resources below. Remember, all these recordings are going to be available on demand of thecube.net and influxdata.com, so check those out. And poke around Influx Data. They are the folks behind InfluxDB, and one of the leaders in the space. We hope you enjoyed the program, this is Dave Vellante for theCUBE, we'll see you soon. (upbeat music)
The Future Is Built On InFluxDB
>> Time series data is any data that's stamped in time in some way. That could be every second, every minute, every five minutes, every hour, every nanosecond, whatever it might be. And typically that data comes from sources in the physical world like devices or sensors, temperature gauges, batteries, any device really, or things in the virtual world: could be software, maybe it's software in the cloud, or data in containers or microservices or virtual machines. So all of these items, whether in the physical or virtual world, they're generating a lot of time series data. Now time series data has been around for a long time, and there are many examples in our everyday lives. All you gotta do is punch up any stock ticker and look at its price over time in graphical form. And that's a simple use case that anyone can relate to, and you can build timestamps into a traditional relational database. You just add a column to capture time. And as well, there are examples of log data being dumped into a data store that can be searched and captured and ingested and visualized. Now, the problem with the latter example that I just gave you is that you gotta hunt and peck and search and extract what you're looking for. And the problem with the former is that traditional general purpose databases, they're designed as sort of a Swiss army knife for any workload. And there are a lot of functions that get in the way and make them inefficient for time series analysis, especially at scale. Like when you think about OT and edge scale, where things are happening super fast, ingestion is coming from many different sources and analysis often needs to be done in real time or near real time. And that's where time series databases come in. They're purpose built and can much more efficiently support ingesting metrics at scale and then comparing data points over time. Time series databases can write and read at significantly higher speeds and deal with far more data than traditional database methods. And they're more cost effective. Instead of throwing processing power at the problem, for example, the underlying architecture and algorithms of time series databases can optimize queries, and they can reclaim wasted storage space and reuse it. At scale, time series databases are simply a better fit for the job. Welcome to Moving the World with InfluxDB, made possible by Influx Data. My name is Dave Vellante and I'll be your host today. Influx Data is the company behind InfluxDB. The open source time series database InfluxDB is designed specifically to handle time series data. As I just explained, we have an exciting program for you today, and we're gonna showcase some really interesting use cases. First, we'll kick it off in our Palo Alto studios, where my colleague, John Furrier, will interview Evan Kaplan, who's the CEO of Influx Data. After John and Evan set the table, John's gonna sit down with Brian Gilmore. He's the director of IOT and emerging tech at Influx Data. And they're gonna dig into where Influx Data is gaining traction and why adoption is occurring and, and why it's so robust. And they're gonna have tons of examples and double click into the technology. And then we bring it back here to our east coast studios, where I get to talk to two practitioners doing amazing things in space with satellites and modern telescopes. These use cases will blow your mind. You don't want to miss it. So thanks for being here today. And with that, let's get started. Take it away, Palo Alto.
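To make that contrast concrete, here is a minimal sketch of the two approaches described above: a timestamp column bolted onto a general-purpose table versus a write to a purpose-built time series store. The table, bucket, and measurement names are hypothetical, and the second half assumes the open source influxdb-client package for Python against an InfluxDB 2.x instance.

```python
# Hedged sketch: a timestamp column on a general-purpose table versus a write
# to a purpose-built time series store. All names are illustrative only.
import sqlite3
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Relational approach: time is just another column the engine knows nothing about.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor_id TEXT, temperature REAL, ts TEXT)")
conn.execute("INSERT INTO readings VALUES ('boiler-1', 71.3, '2023-01-15T12:00:00Z')")

# Time series approach: tags, fields, and time are first-class concepts.
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)
point = (
    Point("temperature")           # measurement
    .tag("sensor_id", "boiler-1")  # indexed tag
    .field("value", 71.3)          # field value; the server assigns the timestamp
)
write_api.write(bucket="telemetry", record=point)
```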
>> Okay. Today we welcome Evan Kaplan, CEO of Influx Data, the company behind InfluxDB. Welcome Evan. Thanks for coming on. >> Hey John, thanks for having me. >> Great segment here on the InfluxDB story. What is the story? Take us through the history. Why time series? What's the story? >> <laugh> So the history is actually pretty interesting. Um, Paul Dix, my partner in this and our founder, um, super passionate about developers and developer experience. And, um, he had worked on Wall Street building a number of time series kind of trading platforms for trading stocks. And from his point of view, it was always what he would call a yak shave, which means you had to do a ton of work just to start doing work, which means you had to write a bunch of extrinsic routines. You had to write a bunch of application handling on existing relational databases in order to come up with something that was optimized for a trading platform or a time series platform. And he sort of, he just developed this real clear point of view: this is not how developers should work. And so in 2013, he went through Y Combinator and he built something; he made his first commit to open source InfluxDB at the end of 2013. And, and he basically, you know, from my point of view, he invented modern time series, which is you start with a purpose-built time series platform to do these kinds of workloads. And you get all the benefits of having something right outta the box. So a developer can be totally productive right away. >> And how many people in the company? What's the history of employees and stuff? >> Yeah, I think we're, I, you know, I always forget the number, but it's something like 230 or 240 people now. Um, the company, I joined the company in 2016 and I love Paul's vision. And I just had a strong conviction about the relationship between time series and IOT. Cuz if you think about it, what sensors do is they speak time series: pressure, temperature, volume, humidity, light. They're measuring, they're instrumenting something over time. And so I thought that would be super relevant over the long term, and I've not regretted it. >> Oh no. And it's interesting, at that time, go back in the history, you know, the role of databases, well, the relational database was the one database to rule the world. And then as clouds started coming in, you started to see more databases proliferate, types of databases, and time series in particular is interesting. Cuz real time has become super valuable from an application standpoint. OT, which speaks time series, means something. It's like time matters. >> Time. >> Yeah. And sometimes data's not worth it after the time, sometimes it's worth it. And then you get the data lake. So you have this whole new evolution. Is this the momentum? What's the momentum, I guess the question is what's the momentum behind >> You mean what's causing us to grow? So >> Yeah, the time series, why is time series >> And the >> Category momentum? What's the bottom line? >> Well, think about it. You think about it from a broad, broad sort of frame, which is, what everybody's trying to do is build increasingly intelligent systems, whether it's a self-driving car or a robotic system that does what you want to do or a self-healing software system. Everybody wants to build increasingly intelligent systems. And so in order to build these increasingly intelligent systems, you have to instrument the system well, and you have to instrument it over time, better and better.
And so you need a tool, a fundamental tool to drive that instrumentation. And that's become clear to everybody, that that instrumentation is all based on time. And so what happened, what happened, what happened, what's gonna happen? And so you get to these applications like predictive maintenance or smarter systems. And increasingly you want to do that stuff not just intelligently, but fast, in real time. So millisecond response, so that when you're driving a self-driving car and the system realizes that you're about to do something, essentially you wanna be able to act in something that looks like real time. All systems want to do that, want to be more intelligent and they want to be more real time. And so we just happen to, you know, we happen to show up at the right time in the evolution of a >> Market. It's interesting, near real time isn't good enough when you need real time. >> <laugh> Yeah, it's not, it's not. And it's like, and it's like, everybody wants, even when you don't need it, ironically, you want it. It's like having the feature for, you know, you buy a new television, you want that one feature, even though you're not gonna use it. You decide that real time is a buying criteria >> For, so you, I mean, what you're saying then is near real time is getting closer to real time as possible, as fast as possible. Right. Okay. So talk about the aspect of data, cuz we're hearing a lot of conversations on theCUBE in particular around how people are implementing and actually getting better. So iterating on data, but you have to know when it happened to know how to fix it. So this is a big part of what we're seeing with people saying, hey, you know, I wanna make my machine learning algorithms better after the fact, I wanna learn from the data. Um, how does that, how do you see that evolving? Is that one of the use cases of sensors, as people bring data in off the network, getting better with the data, knowing when it happened? >> Well, for sure. So, so for sure, what you're saying is, is, is none of this is non-linear, it's all incremental. And so if you take something, you know, just as an easy example, if you take a self-driving car, what you're doing is you're instrumenting that car to understand where it can perform in the real world in real time. And if you do that, if you run the loop, which is I instrumented it, I watch what happens, oh, that's wrong, oh, I have to correct for that, I correct for that in the software, if you do that a billion times, you get a self-driving car. But every system moves along that evolution. And so you get the dynamic of, you know, of constantly instrumenting, watching the system behave, and doing it. And a self-driving car is one thing. But even in the human genome, if you look at some of our customers, you know, people like, you know, people doing solar arrays, people doing power walls, like all of these systems are getting smarter. >> Well, let's get into that. What are the top applications? What are you seeing with InfluxDB, the time series, what's the sweet spot for the application use case and some customers, give some >> Examples. Yeah. So it's, it's pretty easy to understand on one side of the equation, that's the physical side: sensors are getting cheap. Obviously we know that, and the whole physical world is getting instrumented, your home, your car, the factory floor, your wristwatch, your healthcare, you name it. It's getting instrumented in the physical world.
We're watching the physical world in real time. And so there are three or four sweet spots for us, but, but they're all on that side. They're all about IOT. So think about consumer IOT projects like Google's Nest, Tado, um, Particle sensors, um, even delivery engines like Rappi, who deliver, the Instacart of South America, like anywhere there's a physical location, and that's on the consumer side. And then another exciting space is the industrial side. Factories are changing dramatically over time, increasingly moving away from proprietary equipment to developer-driven systems that run operations, because what, what has to get smarter when you're building, when you're building a factory, is the systems all have to get smarter. And then, um, lastly, a lot in renewables and sustainability. So a lot, you know, Tesla, Lucid Motors, Nikola Motors, um, you know, lots to do with electric cars, solar arrays, windmill arrays, just anything that's gonna get instrumented, where that instrumentation becomes part of what the purpose >> Is. It's interesting. The convergence of physical and digital is happening with the data, IOT. You mentioned, you know, you think of IOT, look at the use cases there, it was proprietary OT systems, now becoming more IP-enabled, internet protocol, and now edge compute getting smaller, faster, cheaper, AI going to the edge. Now you have all kinds of new capabilities that bring that real time and time series opportunity. Are you seeing IOT going to a new level? What was the, what's the IOT, where's the IOT dots connecting to, because you know, as these two cultures merge, yeah, operations, basically industrial factory, car, they gotta get smarter. Intelligent edge is a buzzword, but I mean, it has to be more intelligent. Where's the, where's the action in all this? So the >> Action, really, it really, at the core, it's at the developer, right? Because you're looking at these things, it's very hard to get an off the shelf system to do the kinds of physical and software interaction. So the action really happens at the developer. And so what you're seeing is a movement in the world that maybe you and I grew up in, with IT or OT, moving increasingly to that developer-driven capability. And so all of these IOT systems, they're bespoke, they don't come out of the box. And so the developer, the architect, the CTO, they define, what's my business? What am I trying to do? Am I trying to sequence a human genome and figure out when these genes express themselves, or am I trying to figure out when the next heart rate monitor's gonna show up on my Apple Watch, right? What am I trying to do? What's the system I need to build? And so starting with the developer is where all of the good stuff happens here, which is different than it used to be, right? Used to be you'd buy an application or a service or a SaaS thing for it, but with this dynamic, with this integration of systems, it's all about bespoke. It's all about building >> Something. So let's get to the developer real quick, real highlight point here is the data. I mean, I could see a developer saying, okay, I need to have an application for the edge, IOT edge or car. I mean, we're gonna have, I mean, Tesla's got applications on the car, it's right there. I mean, yes, there's the modern application life cycle now. So take us through how this impacts the developer. Does it impact their CI/CD pipeline? Is it cloud native? I mean, where does this all, where does this go to?
>> Well, so first of all, you're talking about, there was an internal journey that we had to go through as a company, which I think is fascinating for anybody who's interested: we went from primarily a monolithic software that was open sourced to building a cloud native platform, which means we had to move from an agile development environment to a CI/CD environment. So to the degree that you are moving your service, whether it's, you know, Tesla monitoring your car and updating your power walls, right, or whether it's a solar company updating the arrays, right, to the degree that that service is cloud, then you increasingly move from an agile development to a CI/CD environment, where you're shipping code to production every day. And so it's not just the developers, it's all the infrastructure to support the developers to run that service and that sort of stuff. I think that's also gonna happen in a big way. >> With your customer base that you have now, and as you see it evolving with InfluxDB, is it that they're gonna be writing more of the application or relying more on others? I mean, obviously there's an open source component here. So when you bring in kind of old way, new way: old way was I got a proprietary platform running all this OT stuff and I gotta write, here's an application that's general purpose. Yeah. I have some flexibility, somewhat brittle, maybe not a lot of robustness to it, but it does its job >> A good way to think about this is versus a new way >> Is >> What? So yeah, a good way to think about this is, what's the role of the developer slash architect slash CTO, that chain, within a large, within an enterprise or a company. And so, um, the way to think about it is, I started my career in the aerospace industry <laugh> and so when you look at what Boeing does to assemble a plane, they build very, very few of the parts. Instead, what they do is they assemble. They buy the wings, they buy the engines, they assemble, actually, they don't buy the wings, it's the one thing, they buy the material for the wings and they build the wings, cuz there's a lot of tech in the wings, and they end up being assemblers, smart assemblers, of what ends up being a flying airplane, which is a pretty big deal even now. And so what, what happens with software people is they have the ability to pull from, you know, the best of the open source world. So they would pull a time series capability from us. Then they would assemble that with, with potentially some ETL logic from somebody else, or they'd assemble it with, um, a Kafka interface to be able to stream the data in. And so they become very good integrators and assemblers, but they become masters of that bespoke application. And I think that's where it goes, cuz you're not writing native code for everything. >> So they're more flexible. They have faster time to market cuz they're assembling way faster, and they get to still maintain their core competency. Okay. Their wings, in this case. >> They become increasingly not just coders, but designers and developers. They become broadly builders is what we like to think of it. People who start and build stuff. By the way, this is not different than what the people just up the road at Google have been doing for years, or the tier ones, Amazon, building all their own.
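As a rough illustration of the assembling Evan describes (pulling a time series capability and a Kafka interface off the shelf and wiring them together), here is what that plumbing might look like in Python. The topic, bucket, and message shape are hypothetical, and it assumes the kafka-python and influxdb-client packages rather than any particular integration he has in mind.

```python
# Hedged sketch of the "assembler" pattern: pull messages off Kafka and write
# them into InfluxDB as points. Topic, bucket, and field names are made up.
import json
from kafka import KafkaConsumer
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

consumer = KafkaConsumer(
    "sensor-readings",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

influx = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = influx.write_api(write_options=SYNCHRONOUS)

# Each Kafka message is assumed to look like {"sensor": "pump-1", "value": 42.0}.
for message in consumer:
    reading = message.value
    write_api.write(
        bucket="assembled-app",
        record=Point("reading")
        .tag("sensor", reading["sensor"])
        .field("value", float(reading["value"])),
    )
```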
>> Well, I think one of the things that's interesting is this idea of systems, developing a system architecture. I mean systems, uh, systems have consequences when you make changes. So when you have now cloud, data center, on-premise, and edge working together, how does that work across the system? You can't have a wing that doesn't work with the other wing kind of thing. >> That's exactly right. But that's where the, that's where the, you know, that Boeing or that airplane building analogy comes in for us. We've really been thoughtful about that, because IOT, it's critical. So our open source edge has the same API as our cloud native stuff, as our enterprise on-prem edge. So our multiple products have the same API and they have a relationship with each other. They can talk with each other. So the builder builds it once. And so this is where, when you start thinking about the components that people have to use to build these services, you wanna make sure, at least that base layer, that database layer, that those components talk to each other. >> So I'll have to ask you, if I'm the customer, I put my customer hat on. Okay. Hey, I'm dealing with a lot. >> That means you have a PO for <laugh> >> A big check, a blank check, if you can answer this question, only if the tech, if, if you get the question right. I got all this important operation stuff. I got my factory, I got my self-driving cars. This isn't like trivial stuff. This is my business. How should I be thinking about time series? Because now I have to make these architectural decisions, as you mentioned, and it's gonna impact my application development. So huge decision point for your customers. What should I care about the most? So what's in it for me? Why is time series >> Important? Yeah, that's a great question. So chances are, if you've got a business that was, you know, 20 years old or 25 years old, you were already thinking about time series. You probably didn't call it that. You built something on Oracle or you built something on IBM's DB2, right? And you made it work within your system. Right? And so that's what you started building. So it's already out there. There are, you know, there are probably hundreds of millions of time series applications out there today. But as you start to think about this increasing need for real time, and you start to think about increasing intelligence, you think about optimizing those systems over time, I hate the word, but digital transformation, then you start with time series. It's a foundational base layer for any system that you're gonna build. There's no system I can think of where time series shouldn't be the foundational base layer. If you just wanna store your data and just leave it there and then maybe look it up every five years, that's fine. That's not time series. Time series is when you're building a smarter, more intelligent, more real time system. And the developers now know that. And so the more they play a role in building these systems, the more obvious it becomes. >> And since I have a PO for you and a big check, yeah. What is, what's the value to me as I, when I implement this? What's the end state, what's it look like when it's up and running? What's the value proposition for me? What's in >> So, so when it's up and running, you're able to handle the queries, the writing of the data, the downsampling of the data, the transforming of it in near real time, so that the other dependencies, a system for adjusting a solar array or trading energy off of a power wall or some sort of human genome, those systems work better. So time series is foundational.
It's not like it's, you know, it's not like it's doing every action that's above, but it's foundational to build a really compelling, intelligent system. I think that's what developers and architects are seeing now. >> Bottom line, final word. What's in it for the customer? What's what, what's your, um, what's your statement to the customer? What would you say to someone looking to do something in time series on edge? >> Yeah. So, so it's pretty clear to us that if you're building, if you view yourself as being in the business of building systems, that you want 'em to be increasingly intelligent, self-healing, autonomous, you want 'em to operate in real time, that you start from time series. But I also wanna say, what's in it for us at Influx? What's in it for us is people are doing some amazing stuff. You know, I highlighted some of the energy stuff, some of the human genome, some of the healthcare. It's hard not to be proud or feel like, wow, yeah, somehow I've been lucky. I've arrived at the right time, in the right place, with the right people to be able to deliver on that. That's, that's also exciting on our side of the equation. >> Yeah. It's critical infrastructure, critical, critical operations. >> Yeah. >> Yeah. Great stuff, Evan. Thanks for coming on. Appreciate this segment. All right. In a moment, Brian Gilmore, director of IOT and emerging technology at Influx Data, will join me. You're watching theCUBE, the leader in tech coverage. Thanks for watching. >> Time series data from sensors, systems, and applications is a key source in driving automation and prediction in technologies around the world. But managing the massive amount of timestamped data generated these days is overwhelming, especially at scale. That's why Influx Data developed InfluxDB, a time series data platform that collects, stores, and analyzes data. InfluxDB empowers developers to extract valuable insights and turn them into action by building transformative IOT, analytics, and cloud native applications, purpose built and optimized to handle the scale and velocity of timestamped data. InfluxDB puts the power in your hands with developer tools that make it easy to get started quickly with less code. InfluxDB is more than a database. It's a robust developer platform with integrated tooling that's written in the languages you love. So you can innovate faster. Run InfluxDB anywhere you want by choosing the provider and region that best fits your needs across AWS, Microsoft Azure, and Google Cloud. InfluxDB is fast and automatically scalable, so you can spend time delivering value to customers, not managing clusters. Take control of your time series data so you can focus on the features and functionalities that give your applications a competitive edge. Get started for free with InfluxDB; visit influxdata.com/cloud to learn more. >> Okay. Now we're joined by Brian Gilmore, director of IOT and emerging technologies at Influx Data. Welcome to the show. >> Thank you, John. Great to be here. >> We just spent some time with Evan going through the company and the value proposition, um, with InfluxDB. What's the momentum, where do you see this coming from? What's the value coming out of this? >> Well, I think it, we're sort of hitting a point where the technology is, is like the adoption of it is becoming mainstream.
We're seeing it in all sorts of organizations, everybody from the most well-funded, advanced big technology companies to smaller academic groups and startups. The data that emits from that technology is time series, and being able to give them a platform, a tool that's super easy to use and easy to start with, and that will of course grow with them, has been key for us. We're riding along with them as they're successful. >>Evan was mentioning that time series has been on everyone's radar, and in the OT business for years. You go back to 2013, 2014, even five years ago, that convergence of physical and digital coming together, the IP-enabled edge. Edge has always been kind of hyped up, but why now? Why is the edge so hot right now from an adoption standpoint? Is it just evolution, the tech getting better? >>I think it's twofold. For some people, everybody was so focused on cloud over the last ten years or so that they forgot about the compute that was available at the edge. And those people, especially in OT and on the factory floor, who weren't able to take full advantage of cloud through their applications, still needed to be able to leverage that compute at the edge. The big thing we're seeing now, which is interesting, is that there's a hybrid nature to all of these applications: there's definitely some data generated at the edge, and definitely some data generated in the cloud, and it's the ability for a developer to tie those two systems together and work with that data in a very uniform way that's giving them the opportunity to build solutions that really deliver value to whatever it is they're trying to do, whether it's the outer reaches of space or optimizing the factory floor. >>I think one of the things you also mentioned is genomics; big data is coming to the real world. And IoT has been kind of a thing for OT and some use cases, but now, with the cloud, all companies have an edge strategy. So what's the secret sauce? Because now this is a hot product for the whole world, not just industrial, but all businesses. What's the secret sauce? >>Well, part of it is just that the technology is becoming more capable, especially on the hardware side. Compute is getting smaller and smaller. And we find that by supporting all the way down to the edge, even to the microcontroller layer with our client libraries, and by working hard to make our applications, especially the database, as small as possible so it can sit as close as possible to the point of origin of that data at the edge, you can run it locally, do your local decision making, and use InfluxDB as an input to the automation, control, and autonomy people are trying to drive at the edge. But when you link it up with everything that's in the cloud, that's when you get all of the cloud-scale capabilities of parallelized AI and machine learning.
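To make the edge-to-cloud pattern Brian describes a bit more concrete, here is a minimal sketch of writing a local sensor reading into an edge InfluxDB 2.x instance with the open source Python client library. The URL, token, org, bucket, and measurement names are illustrative assumptions rather than details taken from the interview.

```python
# Hypothetical example: write one machine reading to a local (edge) InfluxDB 2.x
# instance using the open source influxdb-client library for Python.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholder connection details; a real deployment would read these from
# configuration or environment variables.
client = InfluxDBClient(url="http://localhost:8086",
                        token="EDGE_TOKEN",
                        org="factory-floor")
write_api = client.write_api(write_options=SYNCHRONOUS)

# One reading from a hypothetical controller on the plant floor.
point = (
    Point("machine_metrics")
    .tag("machine_id", "press-07")
    .field("spindle_temp_c", 71.4)
    .field("vibration_mm_s", 2.9)
)

write_api.write(bucket="edge-telemetry", record=point)
client.close()
```

The same few lines can point at a cloud endpoint instead of the local gateway, which is what keeps the edge and cloud tiers symmetric in the way Brian describes.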
>>So what's interesting is the open source success, something we've talked about a lot on theCUBE, how people are leveraging it. You have users in the enterprise, users in the IoT market, but you've got developers now too. How do you see that emerging? How do developers engage? What are some of the things you're seeing that developers are really getting into with InfluxDB? >>Well, there are the developers who are building companies, right? These are the startups and the folks we love to work with who are building new services, new products, things like that, and especially on the consumer side of IoT there's a lot of that. But you've also got to pay attention to the enterprise developers. There are tons of people with the title of engineer in regular enterprise organizations. They're there for systems integration, they're there for looking at what they would build versus what they would buy, and a lot of them come from a strong open source background. They know the communities, they know the top platforms in those spaces, and they're excited to be able to adopt and use them to optimize inside the business rather than just building something brand new. >>It's interesting too, Evan and I were talking about open source versus closed OT systems. So how do you support backwards compatibility with older systems while staying open? There are dozens of data formats out there, a bunch of standards and protocols, and new things are emerging. Everyone wants to have a control plane, everyone wants to leverage the value of data. How do you keep track of it all? What do you support? >>Either through direct connection, like we have a product called Telegraf, and it's unbelievable. It's open source, it's an edge agent, you can run it as close to the edge as you'd like, and it speaks dozens of different protocols in its own right, a couple of which, MQTT and OPC UA, are very applicable to these IoT use cases. But then, because we are not only open source but open in terms of our ability to collect data, we also have a lot of partners who have built really great integrations from their own middleware into InfluxDB. These are companies like Kepware and HighByte, who are real experts in those downstream industrial protocols. That's a business not everybody wants to be in; it requires very specialized, very hard work and a lot of support. So by making those connections and building those ecosystems, we get the best of both worlds: customers can use the platforms they need up to the point where they would be putting data into our database. >>What are some of the customer testimonials they share with you? Can you share some anecdotes, like, wow, that's the best thing I've ever used, this really changed my business, or this is great tech that's helped me in these other areas? What are some of the soundbites you hear from customers when they're successful? >>It ranges.
You've got customers who are, you know, just finally being able to do the monitoring of assets, you know, sort of at the edge in the field, we have a customer who's who's has these tunnel boring machines that go deep into the earth to like drill tunnels for, for, you know, cars and, and, you know, trains and things like that. You know, they are just excited to be able to stick a database onto those tunnel, boring machines, send them into the depths of the earth and know that when they come out, all of that telemetry at a very high frequency has been like safely stored. And then it can just very quickly and instantly connect up to their, you know, centralized database. So like just having that visibility is brand new to them. And that's super important. On the other hand, we have customers who are way far beyond the monitoring use case, where they're actually using the historical records in the time series database to, um, like I think Evan mentioned like forecast things. So for predictive maintenance, being able to pull in the telemetry from the machines, but then also all of that external enrichment data, the metadata, the temperatures, the pressure is who is operating the machine, those types of things, and being able to easily integrate with platforms like Jupyter notebooks or, you know, all of those scientific computing and machine learning libraries to be able to build the models, train the models, and then they can send that information back down to InfluxDB to apply it and detect those anomalies, which >>Are, I think that's gonna be an, an area. I personally think that's a hot area because I think if you look at AI right now, yeah. It's all about training the machine learning albums after the fact. So time series becomes hugely important. Yeah. Cause now you're thinking, okay, the data matters post time. Yeah. First time. And then it gets updated the new time. Yeah. So it's like constant data cleansing data iteration, data programming. We're starting to see this new use case emerge in the data field. >>Yep. Yeah. I mean, I think you agree. Yeah, of course. Yeah. The, the ability to sort of handle those pipelines of data smartly, um, intelligently, and then to be able to do all of the things you need to do with that data in stream, um, before it hits your sort of central repository. And, and we make that really easy for customers like Telegraph, not only does it have sort of the inputs to connect up to all of those protocols and the ability to capture and connect up to the, to the partner data. But also it has a whole bunch of capabilities around being able to process that data, enrich it, reform at it, route it, do whatever you need. So at that point you're basically able to, you're playing your data in exactly the way you would wanna do it. You're routing it to different, you know, destinations and, and it's, it's, it's not something that really has been in the realm of possibility until this point. Yeah. Yeah. >>And when Evan was on it's great. He was a CEO. So he sees the big picture with customers. He was, he kinda put the package together that said, Hey, we got a system. We got customers, people are wanting to leverage our product. What's your PO they're sell. He's selling too as well. So you have that whole CEO perspective, but he brought up this notion that there's multiple personas involved in kind of the influx DB system architect. You got developers and users. Can you talk about that? 
Reality as customers start to commercialize and operationalize this from a commercial standpoint, you got a relationship to the cloud. Yep. The edge is there. Yep. The edge is getting super important, but cloud brings a lot of scale to the table. So what is the relationship to the cloud? Can you share your thoughts on edge and its relationship to the cloud? >>Yeah. I mean, I think edge, you know, edges, you can think of it really as like the local information, right? So it's, it's generally like compartmentalized to a point of like, you know, a single asset or a single factory align, whatever. Um, but what people do who wanna pro they wanna be able to make the decisions there at the edge locally, um, quickly minus the latency of sort of taking that large volume of data, shipping it to the cloud and doing something with it there. So we allow them to do exactly that. Then what they can do is they can actually downsample that data or they can, you know, detect like the really important metrics or the anomalies. And then they can ship that to a central database in the cloud where they can do all sorts of really interesting things with it. Like you can get that centralized view of all of your global assets. You can start to compare asset to asset, and then you can do those things like we talked about, whereas you can do predictive types of analytics or, you know, larger scale anomaly detections. >>So in this model you have a lot of commercial operations, industrial equipment. Yep. The physical plant, physical business with virtual data cloud all coming together. What's the future for InfluxDB from a tech standpoint. Cause you got open. Yep. There's an ecosystem there. Yep. You have customers who want operational reliability for sure. I mean, so you got organic <laugh> >>Yeah. Yeah. I mean, I think, you know, again, we got iPhones when everybody's waiting for flying cars. Right. So I don't know. We can like absolutely perfectly predict what's coming, but I think there are some givens and I think those givens are gonna be that the world is only gonna become more hybrid. Right. And then, you know, so we are going to have much more widely distributed, you know, situations where you have data being generated in the cloud, you have data gen being generated at the edge and then there's gonna be data generated sort sort of at all points in between like physical locations as well as things that are, that are very virtual. And I think, you know, we are, we're building some technology right now. That's going to allow, um, the concept of a database to be much more fluid and flexible, sort of more aligned with what a file would be like. >>And so being able to move data to the compute for analysis or move the compute to the data for analysis, those are the types of, of solutions that we'll be bringing to the customers sort of over the next little bit. Um, but I also think we have to start thinking about like what happens when the edge is actually off the planet. Right. I mean, we've got customers, you're gonna talk to two of them, uh, in the panel who are actually working with data that comes from like outside the earth, like, you know, either in low earth orbit or you know, all the way sort of on the other side of the universe. Yeah. And, and to be able to process data like that and to do so in a way it's it's we gotta, we gotta build the fundamentals for that right now on the factory floor and in the mines and in the tunnels. Um, so that we'll be ready for that one. 
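Brian's point about downsampling at the edge and shipping only the interesting signal to a central database boils down to a windowed aggregation followed by a second write. The sketch below is one rough way to express that with the Python client and a Flux query; the endpoints, bucket names, and tag keys are assumptions for illustration, not anything described in the conversation.

```python
# Hypothetical sketch: read an hour of raw edge data, downsample it to
# one-minute means, and forward the result to a central (cloud) instance.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

edge = InfluxDBClient(url="http://edge-gateway:8086",
                      token="EDGE_TOKEN", org="factory-floor")
cloud = InfluxDBClient(url="https://cloud.example.com",
                       token="CLOUD_TOKEN", org="hq")

# Flux query: average the last hour of raw readings into 1-minute windows.
flux = '''
from(bucket: "edge-telemetry")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "machine_metrics")
  |> aggregateWindow(every: 1m, fn: mean, createEmpty: false)
'''

cloud_writer = cloud.write_api(write_options=SYNCHRONOUS)
for table in edge.query_api().query(flux):
    for record in table.records:
        point = (
            Point(record.get_measurement())
            .tag("machine_id", record.values.get("machine_id", "unknown"))
            .field(record.get_field(), record.get_value())
            .time(record.get_time())
        )
        cloud_writer.write(bucket="central-telemetry", record=point)

edge.close()
cloud.close()
```

In production this would more naturally live in a scheduled task or a Telegraf pipeline rather than an ad hoc script, but the data flow (window, aggregate, forward) is the same one Brian is describing.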
>>I think you bring up a good point there because one of the things that's common in the industry right now, people are talking about, this is kind of new thinking is hyper scale's always been built up full stack developers, even the old OT world, Evan was pointing out that they built everything right. And the world's going to more assembly with core competency and IP and also property being the core of their apple. So faster assembly and building, but also integration. You got all this new stuff happening. Yeah. And that's to separate out the data complexity from the app. Yes. So space genome. Yep. Driving cars throws off massive data. >>It >>Does. So is Tesla, uh, is the car the same as the data layer? >>I mean the, yeah, it's, it's certainly a point of origin. I think the thing that we wanna do is we wanna let the developers work on the world, changing problems, the things that they're trying to solve, whether it's, you know, energy or, you know, any of the other health or, you know, other challenges that these teams are, are building against. And we'll worry about that time series data and the underlying data platform so that they don't have to. Right. I mean, I think you talked about it, uh, you know, for them just to be able to adopt the platform quickly, integrate it with their data sources and the other pieces of their applications. It's going to allow them to bring much faster time to market on these products. It's gonna allow them to be more iterative. They're gonna be able to do more sort of testing and things like that. And ultimately it will, it'll accelerate the adoption and the creation of >>Technology. You mentioned earlier in, in our talk about unification of data. Yeah. How about APIs? Cuz developers love APIs in the cloud unifying APIs. How do you view view that? >>Yeah, I mean, we are APIs, that's the product itself. Like everything, people like to think of it as sort of having this nice front end, but the front end is B built on our public APIs. Um, you know, and it, it allows the developer to build all of those hooks for not only data creation, but then data processing, data analytics, and then, you know, sort of data extraction to bring it to other platforms or other applications, microservices, whatever it might be. So, I mean, it is a world of APIs right now and you know, we, we bring a very sort of useful set of them for managing the time series data. These guys are all challenged with. It's >>Interesting. You and I were talking before we came on camera about how, um, data is, feels gonna have this kind of SRE role that DevOps had site reliability engineers, which manages a bunch of servers. There's so much data out there now. Yeah. >>Yeah. It's like reigning data for sure. And I think like that ability to be like one of the best jobs on the planet is gonna be to be able to like, sort of be that data Wrangler to be able to understand like what the data sources are, what the data formats are, how to be able to efficiently move that data from point a to point B and you know, to process it correctly so that the end users of that data aren't doing any of that sort of hard upfront preparation collection storage's >>Work. Yeah. That's data as code. I mean, data engineering is it is becoming a new discipline for sure. And, and the democratization is the benefit. Yeah. To everyone, data science get easier. I mean data science, but they wanna make it easy. Right. <laugh> yeah. They wanna do the analysis, >>Right? Yeah. 
I mean, it's a really good point. We try to give our users as many ways as possible to get data in and get data out. We think about it as meeting them where they are. So we have the client libraries that allow them to write to us directly from the applications and the languages they're working in, but they can also pull it out. And at that point, nobody knows the users, the end consumers of that data, better than the people building those applications. So they're building user interfaces that make all of that data accessible for their end users inside their organizations. >>Well, Brian, great segment, great insight. Thanks for sharing all the complexities in IoT that you guys help take away with the APIs, the assembly, and all the system architectures that are changing. Edge is real, cloud is real, mainstream enterprises, and you've got developer traction too, so congratulations. >>Yeah, it's great. >>Any last word you want to share? >>No, just, please, if you're going to check out InfluxDB, download it, try out the open source, contribute if you can. That's a huge thing; it's part of being in the open source community. But definitely just use it. Once people try it out, they understand very quickly. >>So open source, with developers, enterprise, and edge all coming together. You're going to hear more about that in the next segment too. Thanks for coming on. >>Thanks. >>When we return, Dave Vellante will lead a panel on edge and data at InfluxDB. You're watching theCUBE, the leader in high tech enterprise coverage. >>We're a startup, we move really fast, and we find that InfluxDB can move as fast as us. It's just a great group, very collaborative, very interested in manufacturing, and we see a bright future in working with Influx. My name is Aaron Seley, I'm the CTO at HighByte. HighByte is one of the first companies to focus on manufacturing data and apply the concepts of DataOps: treat that data as an asset to deliver to the IT system, to enable applications like overall equipment effectiveness that can help the factory produce better, smarter, and faster. Time series data in manufacturing is really important. If you take a piece of equipment, you have the temperature and pressure at the moment that you can look at to see the state of what's going on. Without that context and understanding, you can't do what manufacturers ultimately want to do, which is predict the future. >>InfluxDB represents a new way to store time series data with more advanced and, more importantly, more open technologies. The other thing Influx does really well is that once the data is in Influx, it's very easy to get out. They have a modern REST API and other ways to access the data, which would be much more difficult with classic historians. HighByte can serve to model and aggregate data on the shop floor from a multitude of sources, whether that's OPC UA servers, manufacturing execution systems, ERP, et cetera, and then push that seamlessly into Influx to be able to run calculations. Manufacturing is changing with Industry 4.0, and what we're seeing is Influx being part of that equation.
Being used to store data off the unified namespace, we recommend InfluxDB all the time to customers that are exploring a new way to share manufacturing data, called the unified namespace, who have open questions around: how do I share this new data that's coming through my UNS or my MQTT broker? How do I store it and be able to query it over time? And we often point to Influx as a solution for that. It's a great brand, a great group of people, and a great technology. >>Okay. We're now going to go into the customer panel, and we'd like to welcome Angelo Fausti, who's a software engineer at the Vera C. Rubin Observatory, and Caleb MacLachlan, who's a senior spacecraft operations software engineer at Loft Orbital. Guys, thanks for joining us. Folks, you don't want to miss this interview. Caleb, let's start with you. You work for an extremely cool company; you're launching satellites into space. Of course, doing that is highly complex and not a cheap endeavor. Tell us about Loft Orbital and what you guys do to attack that problem. >>Yeah, absolutely, and thanks for having me here, by the way. Loft Orbital is a series B startup now, and our mission basically is to provide rapid access to space for all kinds of customers. Historically, if you want to fly something in space, do something in space, it's extremely expensive. You need to book a launch, build a bus, hire a team to operate it, have a big software team, and then eventually worry about a lot of very specialized engineering. What we're trying to do is change that from a super specialized problem with an extremely high barrier to entry into an infrastructure problem, so that getting your programs, your mission, deployed on orbit, with access to different sensors, cameras, radios, and so on, is almost as simple as deploying a VM in AWS or GCP. >>So that's kind of our mission. And just to give a really brief example of the kind of customer we can serve, there's a really cool company called Totum Labs who is working on building an IoT constellation, for the Internet of Things, basically being able to get telemetry from all over the world. They're the first company to demonstrate indoor IoT, which means you have this little modem inside a container that you can track from anywhere in the world as it's going across the ocean. So it's really little, and they've been able to stay a small startup that's focused on their product, which is that super complicated, cool radio, while we handle the whole space segment for them, which before Loft was really impossible. So our mission is providing space infrastructure as a service. We're kind of groundbreaking in this area, and we're serving a huge variety of customers with all kinds of different missions, and obviously generating a ton of data in space that we've got to handle. >>Amazing what you guys do, Caleb. Now, I know you were lured to the skies very early in your career, but how did you land on this business? >>Yeah, so just a little bit about me: some people don't necessarily know what they want to do early in their life. For me, I was five years old and I knew I wanted to be in the space industry.
So, you know, I started in the Air Force, but I've stayed in the space industry my whole career, and this is actually the fifth space startup that I've been a part of. I started out in satellites, spent some time working in the launch industry on rockets, and now I'm back in satellites, and honestly, this is the most exciting of the different space startups I've been a part of. >>Super interesting. Okay, Angelo, let's talk about the Rubin Observatory. Vera C. Rubin, famous woman scientist, galaxy guru. Now you guys, the observatory, you're way up high, you're going to get a good look at the southern sky. I know COVID slowed you down a bit, but no doubt you continued to code away on the software. I know you're getting close; you've got to be super excited. Give us the update on the observatory and your role. >>All right. So yeah, Rubin is a state-of-the-art observatory that is under construction on a remote mountain in Chile. With Rubin, we conduct the Legacy Survey of Space and Time: we are going to observe the sky with an eight-meter optical telescope and take a thousand pictures every night with a 3.2 gigapixel camera, and we are going to do that for 10 years, which is the duration of the survey. >>Amazing project. Now, you are a doctor of philosophy, so you probably spent some time thinking about what's out there, and then you went and earned a PhD in astronomy and astrophysics. So this is something you've been working on for the better part of your career, isn't it? >>Yeah, that's right, about 15 years. I studied physics in college, then I got a PhD in astronomy, and I worked for about five years on another project, the Dark Energy Survey, before joining Rubin in 2015. >>Impressive. So it seems like both of your organizations are looking at space from two different angles. One thing you both have in common, of course, is software, and you both use InfluxDB as part of your data infrastructure. How did you discover InfluxDB and get into it? How do you use the platform? Maybe Caleb, you could start. >>Yeah, absolutely. So the first company where I used InfluxDB extensively was a launch startup called Astra. We were in the process of designing our first generation rocket there and testing the engines, pumps, everything that goes into a rocket. When I joined the company, our data story was not very mature: we were collecting a bunch of data in LabVIEW, and engineers were taking that over to MATLAB to process it. At first, that's the way a lot of engineers and scientists are used to working, and people weren't entirely sure that needed to change. But the nice thing about InfluxDB is that it's so easy to deploy, so our software engineering team was able to get it deployed and up and running very quickly, and then quickly backport all of the data we had collected thus far into Influx, and what happened next was amazing to see.
Uh, there was like this aha moment of our engineers who are used to this post process kind of method for dealing with their data where they could just almost instantly easily discover data that they hadn't been able to see before and take the manual processes that they would run after a test and just throw those all in influx and have live data as tests were coming. And, you know, I saw them implementing like crazy rocket equation type stuff in influx, and it just was totally game changing for how we tested. >>So Angelo, I was explaining in my open, you know, you could, you could add a column in a traditional RDBMS and do time series, but with the volume of data that you're talking about, and the example of the Caleb just gave you, I mean, you have to have a purpose built time series database, where did you first learn about influx DB? >>Yeah, correct. So I work with the data management team, uh, and my first project was the record metrics that measured the performance of our software, uh, the software that we used to process the data. So I started implementing that in a relational database. Um, but then I realized that in fact, I was dealing with time series data and I should really use a solution built for that. And then I started looking at time series databases and I found influx B. And that was, uh, back in 2018. The another use for influx DB that I'm also interested is the visits database. Um, if you think about the observations we are moving the telescope all the time in pointing to specific directions, uh, in the Skype and taking pictures every 30 seconds. So that itself is a time series. And every point in that time series, uh, we call a visit. So we want to record the metadata about those visits and flex to, uh, that time here is going to be 10 years long, um, with about, uh, 1000 points every night. It's actually not too much data compared to other, other problems. It's, uh, really just a different, uh, time scale. >>The telescope at the Ruben observatory is like pun intended, I guess the star of the show. And I, I believe I read that it's gonna be the first of the next gen telescopes to come online. It's got this massive field of view, like three orders of magnitude times the Hub's widest camera view, which is amazing, right? That's like 40 moons in, in an image amazingly fast as well. What else can you tell us about the telescope? >>Um, this telescope, it has to move really fast and it also has to carry, uh, the primary mirror, which is an eight meter piece of glass. It's very heavy and it has to carry a camera, which has about the size of a small car. And this whole structure weighs about 300 tons for that to work. Uh, the telescope needs to be, uh, very compact and stiff. Uh, and one thing that's amazing about it's design is that the telescope, um, is 300 tons structure. It sits on a tiny film of oil, which has the diameter of, uh, human hair. And that makes an almost zero friction interface. In fact, a few people can move these enormous structure with only their hands. Uh, as you said, uh, another aspect that makes this telescope unique is the optical design. It's a wide field telescope. So each image has, uh, in diameter the size of about seven full moons. And, uh, with that, we can map the entire sky in only, uh, three days. And of course doing operations everything's, uh, controlled by software and it is automatic. 
There's a very complex piece of software called the scheduler, which is responsible for moving the telescope and the camera, which records 15 terabytes of data every night. >>Hmm. And Angelo, all this data lands in InfluxDB, correct? And what are you doing with all that data? >>Actually, not all of it. We are using InfluxDB to record engineering data and metadata about the observations, like telemetry, events, and commands from the telescope. That's a much smaller data set compared to the images, but it is still challenging because you have some high frequency data that the system needs to keep up with, and we need to store this data and have it around for the lifetime of the project. >>Got it, thank you. Okay, Caleb, let's bring you back in. Tell us more about these dishwasher-size satellites. You're using kind of a multi-tenant model; I think it's genius, but tell us about the satellites themselves. >>Yeah, absolutely. So we have some satellites in space already that, as you said, are dishwasher or mini fridge kind of size, and we're working on a bunch more that are a variety of sizes, from shoebox to, I guess, a few times larger than what we have today. We do aim to have effectively a multi-tenant model where we buy a bus off the shelf. The bus is what you can think of as the core piece of the satellite, almost like a motherboard: it provides the power, it has the solar panels, it has some radios attached to it, and it handles the attitude control, basically steering the spacecraft in orbit. And then we also build in house what we call our payload hub, which has any customer payloads attached and our own edge processing capabilities built into it. >>So we integrate that, we launch it, and those things, because they're in low Earth orbit, are orbiting the Earth every 90 minutes. That's about seven kilometers per second, which is several times faster than a speeding bullet. One of the unique challenges of operating spacecraft in low Earth orbit is that generally you can't talk to them all the time, so we're managing these things through very brief windows of time where we get to talk to them through our ground sites, either in Antarctica or in the north pole region. >>Talk more about how you use InfluxDB to make sense of this data through all this tech that you're launching into space. >>When I joined the company, we started off storing all of that, as Angelo did, in a regular relational database, and we found that it was so slow, and the size of our data would balloon over the course of a couple of days to the point where we weren't able to even store all of the data we were getting. So we migrated to InfluxDB to store our time series telemetry from the spacecraft. That's things like power levels, voltages, currents, counts, whatever metadata we need to monitor about the spacecraft; we now store that in InfluxDB. And now we can easily store the entire volume of data for the mission life so far without having to worry about the size bloating to an unmanageable amount. >>And we can also seamlessly query large chunks of data.
Like if I need to see, you know, for example, as an operator, I might wanna see how my, uh, battery state of charge is evolving over the course of the year. I can have a plot and an influx that loads that in a fraction of a second for a year's worth of data, because it does, you know, intelligent, um, I can intelligently group the data by, uh, sliding time interval. Uh, so, you know, it's been extremely powerful for us to access the data and, you know, as time has gone on, we've gradually migrated more and more of our operating data into influx. >>You know, let's, let's talk a little bit, uh, uh, but we throw this term around a lot of, you know, data driven, a lot of companies say, oh, yes, we're data driven, but you guys really are. I mean, you' got data at the core, Caleb, what does that, what does that mean to you? >>Yeah, so, you know, I think the, and the clearest example of when I saw this be like totally game changing is what I mentioned before at Astro where our engineer's feedback loop went from, you know, a lot of kind of slow researching, digging into the data to like an instant instantaneous, almost seeing the data, making decisions based on it immediately, rather than having to wait for some processing. And that's something that I've also seen echoed in my current role. Um, but to give another practical example, uh, as I said, we have a huge amount of data that comes down every orbit, and we need to be able to ingest all of that data almost instantaneously and provide it to the operator. And near real time, you know, about a second worth of latency is all that's acceptable for us to react to, to see what is coming down from the spacecraft and building that pipeline is challenging from a software engineering standpoint. >>Um, our primary language is Python, which isn't necessarily that fast. So what we've done is started, you know, in the, in the goal of being data driven is publish metrics on individual, uh, how individual pieces of our data processing pipeline are performing into influx as well. And we do that in production as well as in dev. Uh, so we have kind of a production monitoring, uh, flow. And what that has done is allow us to make intelligent decisions on our software development roadmap, where it makes the most sense for us to, uh, focus our development efforts in terms of improving our software efficiency. Uh, just because we have that visibility into where the real problems are. Um, it's sometimes we've found ourselves before we started doing this kind of chasing rabbits that weren't necessarily the real root cause of issues that we were seeing. Uh, but now, now that we're being a bit more data driven, there we are being much more effective in where we're spending our resources and our time, which is especially critical to us as we scale to, from supporting a couple satellites, to supporting many, many satellites at >>Once. Yeah. Coach. So you reduced those dead ends, maybe Angela, you could talk about what, what sort of data driven means to, to you and your teams? >>I would say that, um, having, uh, real time visibility, uh, to the telemetry data and, and metrics is, is, is crucial for us. We, we need, we need to make sure that the image that we collect with the telescope, uh, have good quality and, um, that they are within the specifications, uh, to meet our science goals. And so if they are not, uh, we want to know that as soon as possible and then, uh, start fixing problems. >>Caleb, what are your sort of event, you know, intervals like? 
>>So I would say that, you know, as of today on the spacecraft, the event, the, the level of timing that we deal with probably tops out at about, uh, 20 Hertz, 20 measurements per second on, uh, things like our, uh, gyroscopes, but the, you know, I think the, the core point here of the ability to have high precision data is extremely important for these kinds of scientific applications. And I'll give an example, uh, from when I worked at, on the rocket at Astra there, our baseline data rate that we would ingest data during a test is, uh, 500 Hertz. So 500 samples per second. And in some cases we would actually, uh, need to ingest much higher rate data, even up to like 1.5 kilohertz. So, uh, extremely, extremely high precision, uh, data there where timing really matters a lot. And, uh, you know, I can, one of the really powerful things about influx is the fact that it can handle this. >>That's one of the reasons we chose it, uh, because there's times when we're looking at the results of a firing where you're zooming in, you know, I talked earlier about how on my current job, we often zoom out to look, look at a year's worth of data. You're zooming in to where your screen is preoccupied by a tiny fraction of a second. And you need to see same thing as Angela just said, not just the actual telemetry, which is coming in at a high rate, but the events that are coming out of our controllers. So that can be something like, Hey, I opened this valve at exactly this time and that goes, we wanna have that at, you know, micro or even nanosecond precision so that we know, okay, we saw a spike in chamber pressure at, you know, at this exact moment, was that before or after this valve open, those kind of, uh, that kind of visibility is critical in these kind of scientific, uh, applications and absolutely game changing to be able to see that in, uh, near real time and, uh, with a really easy way for engineers to be able to visualize this data themselves without having to wait for, uh, software engineers to go build it for them. >>Can the scientists do self-serve or are you, do you have to design and build all the analytics and, and queries for your >>Scientists? Well, I think that's, that's absolutely from, from my perspective, that's absolutely one of the best things about influx and what I've seen be game changing is that, uh, generally I'd say anyone can learn to use influx. Um, and honestly, most of our users might not even know they're using influx, um, because what this, the interface that we expose to them is Grafana, which is, um, a generic graphing, uh, open source graphing library that is very similar to influx own chronograph. Sure. And what it does is, uh, let it provides this, uh, almost it's a very intuitive UI for building your queries. So you choose a measurement and it shows a dropdown of available measurements. And then you choose a particular, the particular field you wanna look at. And again, that's a dropdown, so it's really easy for our users to discover. And there's kind of point and click options for doing math aggregations. You can even do like perfect kind of predictions all within Grafana, the Grafana user interface, which is really just a wrapper around the APIs and functionality of the influx provides putting >>Data in the hands of those, you know, who have the context of domain experts is, is key. Angela, is it the same situation for you? Is it self serve? >>Yeah, correct. 
As I mentioned before, we have the astronomers making their own dashboards, because they know exactly what they need to visualize. It's all about using the right tool for the job. For us, when I joined we weren't using InfluxDB, and we were dealing with serious issues of the database growing to an incredible size extremely quickly, and even querying short periods of data was taking on the order of seconds, which is just not workable for operations. >>Guys, this has been really informative. It's pretty exciting to see what the edge is: mountaintops, low Earth orbit. Space is the ultimate edge, isn't it? I wonder if you could answer two questions to wrap here: what comes next for you guys, and is there something you're really excited about that you're working on? Caleb, maybe you could go first, and Angelo, you can bring us home. >>Basically, what's next for Loft Orbital is more satellites, a greater push towards infrastructure, and really making our mission happen, which is to make space simple for our customers and for everyone. We're scaling the company like crazy now to make that happen. It's an extremely exciting time to be in this company and in this industry as a whole, because there are so many interesting applications out there, so many cool ways of leveraging space that people are taking advantage of, and with companies like SpaceX and the now rapidly lowering cost of launch, it's just a really exciting place to be. We're launching more satellites, we're scaling up for some constellations, and our ground system has to be improved to match. So there are a lot of improvements we're working on to really scale up our control software to be best in class and make it capable of handling such a large workload. >>You guys hiring? >><laugh> We are absolutely hiring. We have positions all over the company: we need software engineers, and we need people who do more aerospace-specific work. So absolutely, I'd encourage anyone to check out the Loft Orbital website if this is at all interesting. >>All right, Angelo, bring us home. >>Yeah. So what's next for us is really getting this telescope working and collecting data. When that happens, there's going to be a deluge of data coming out of this camera, and handling all of that data is going to be really challenging. I want to be here for that. <laugh> I'm looking forward to it. For next year we have an important milestone, which is our commissioning camera, a simplified version of the full camera, going on sky, and most of the system has to be working by then. >>Nice. All right guys, with that, we're going to end it. Thank you so much, really fascinating, and thanks to InfluxDB for making this possible. Really groundbreaking stuff, enabling value creation at the edge, in the cloud, and of course beyond, in space. Really transformational work that you guys are doing, so congratulations, and I really appreciate the broader community. I can't wait to see what comes next from this entire ecosystem. Now, in a moment, I'll be back to wrap up. This is Dave Vellante, and you're watching theCUBE, the leader in high tech enterprise coverage.
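As a footnote to the panel, here is a rough sketch of the kind of operator query Caleb described earlier: loading a year of battery state-of-charge telemetry grouped into coarse windows so it renders in a fraction of a second. The endpoint, bucket, measurement, field, and tag values are hypothetical stand-ins, not actual mission names.

```python
# Hypothetical sketch of a year-long battery state-of-charge view: aggregate
# raw spacecraft telemetry into daily means so a full year plots quickly.
from influxdb_client import InfluxDBClient

client = InfluxDBClient(url="https://ops.example.com",
                        token="OPS_TOKEN",
                        org="mission-ops")

flux = '''
from(bucket: "spacecraft-telemetry")
  |> range(start: -1y)
  |> filter(fn: (r) => r._measurement == "eps" and r._field == "battery_soc")
  |> filter(fn: (r) => r.spacecraft == "sat-01")
  |> aggregateWindow(every: 1d, fn: mean, createEmpty: false)
'''

for table in client.query_api().query(flux):
    for record in table.records:
        print(record.get_time(), round(record.get_value(), 2))

client.close()
```

A Grafana or Chronograf panel would typically issue the same windowed query behind its point-and-click interface, which is the self-serve experience described in the panel.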
>>Welcome. Telegraf is a popular open source data collection agent. Telegraf collects data from hundreds of systems like IoT sensors, cloud deployments, and enterprise applications. It's used by everyone from individual developers and hobbyists to large corporate teams. The Telegraf project has a very welcoming and active open source community. Learn how to get involved by visiting the Telegraf GitHub page, whether you want to contribute code, improve documentation, participate in testing, or just show what you're doing with Telegraf. We'd love to hear what you're building. >>Thanks for watching "Moving the World with InfluxDB," made possible by InfluxData. I hope you learned some things and are inspired to look deeper into where time series databases might fit into your environment. If you're dealing with large and/or fast data volumes, you want to scale cost effectively with the highest performance, and you're analyzing metrics and data over time, time series databases just might be a great fit for you. Try InfluxDB out. You can start with a free cloud account by clicking on the link in the resources below. Remember, all these recordings are going to be available on demand at theCUBE.net and influxdata.com, so check those out and poke around InfluxData; they are the folks behind InfluxDB and one of the leaders in the space. We hope you enjoyed the program. This is Dave Vellante for theCUBE. We'll see you soon.
Analyst Power Panel: Future of Database Platforms
(upbeat music) >> Once a staid and boring business dominated by IBM, Oracle, and at the time newcomer Microsoft, along with a handful of wannabes, the database business has exploded in the past decade and has become a staple of financial excellence, customer experience, analytic advantage, competitive strategy, growth initiatives, visualizations, not to mention compliance, security, privacy and dozens of other important use cases and initiatives. And on the vendor's side of the house, we've seen the rapid ascendancy of cloud databases. Most notably from Snowflake, whose massive raises leading up to its IPO in late 2020 sparked a spate of interest and VC investment in the separation of compute and storage and all that elastic resource stuff in the cloud. The company joined AWS, Azure and Google to popularize cloud databases, which have become a linchpin of competitive strategies for technology suppliers. And if I get you to put your data in my database and in my cloud, and I keep innovating, I'm going to build a moat and achieve a hugely attractive lifetime customer value in a really amazing marginal economics dynamic that is going to fund my future. And I'll be able to sell other adjacent services, not just compute and storage, but machine learning and inference and training and all kinds of stuff, dozens of lucrative cloud offerings. Meanwhile, the database leader, Oracle has invested massive amounts of money to maintain its lead. It's building on its position as the king of mission critical workloads and making typical Oracle like claims against the competition. Most were recently just yesterday with another announcement around MySQL HeatWave. An extension of MySQL that is compatible with on-premises MySQLs and is setting new standards in price performance. We're seeing a dramatic divergence in strategies across the database spectrum. On the far left, we see Amazon with more than a dozen database offerings each with its own API and primitives. AWS is taking a right tool for the right job approach, often building on open source platforms and creating services that it offers to customers to solve very specific problems for developers. And on the other side of the line, we see Oracle, which is taking the Swiss Army Knife approach, converging database functionality, enabling analytic and transactional workloads to run in the same data store, eliminating the need to ETL, at the same time adding capabilities into its platform like automation and machine learning. Welcome to this database Power Panel. My name is Dave Vellante, and I'm so excited to bring together some of the most respected industry analyst in the community. Today we're going to assess what's happening in the market. We're going to dig into the competitive landscape and explore the future of database and database platforms and decode what it means to customers. Let me take a moment to welcome our guest analyst today. Matt Kimball is a vice president and principal analysts at Moor Insights and Strategy, Matt. He knows products, he knows industry, he's got real world IT expertise, and he's got all the angles 25 plus years of experience in all kinds of great background. Matt, welcome. Thanks very much for coming on theCUBE. Holgar Mueller, friend of theCUBE, vice president and principal analyst at Constellation Research in depth knowledge on applications, application development, knows developers. He's worked at SAP and Oracle. 
And then Bob Evans is Chief Content Officer and co-founder of the Acceleration Economy, founder and principle of Cloud Wars. Covers all kinds of industry topics and great insights. He's got awesome videos, these three minute hits. If you haven't seen 'em, checking them out, knows cloud companies, his Cloud Wars minutes are fantastic. And then of course, Marc Staimer is the founder of Dragon Slayer Research. A frequent contributor and guest analyst at Wikibon. He's got a wide ranging knowledge across IT products, knows technology really well, can go deep. And then of course, Ron Westfall, Senior Analyst and Director Research Director at Futurum Research, great all around product trends knowledge. Can take, you know, technical dives and really understands competitive angles, knows Redshift, Snowflake, and many others. Gents, thanks so much for taking the time to join us in theCube today. It's great to have you on, good to see you. >> Good to be here, thanks for having us. >> Thanks, Dave. >> All right, let's start with an around the horn and briefly, if each of you would describe, you know, anything I missed in your areas of expertise and then you answer the following question, how would you describe the state of the database, state of platform market today? Matt Kimball, please start. >> Oh, I hate going first, but that it's okay. How would I describe the world today? I would just in one sentence, I would say, I'm glad I'm not in IT anymore, right? So, you know, it is a complex and dangerous world out there. And I don't envy IT folks I'd have to support, you know, these modernization and transformation efforts that are going on within the enterprise. It used to be, you mentioned it, Dave, you would argue about IBM versus Oracle versus this newcomer in the database space called Microsoft. And don't forget Sybase back in the day, but you know, now it's not just, which SQL vendor am I going to go with? It's all of these different, divergent data types that have to be taken, they have to be merged together, synthesized. And somehow I have to do that cleanly and use this to drive strategic decisions for my business. That is not easy. So, you know, you have to look at it from the perspective of the business user. It's great for them because as a DevOps person, or as an analyst, I have so much flexibility and I have this thing called the cloud now where I can go get services immediately. As an IT person or a DBA, I am calling up prevention hotlines 24 hours a day, because I don't know how I'm going to be able to support the business. And as an Oracle or as an Oracle or a Microsoft or some of the cloud providers and cloud databases out there, I'm licking my chops because, you know, my market is expanding and expanding every day. >> Great, thank you for that, Matt. Holgar, how do you see the world these days? You always have a good perspective on things, share with us. >> Well, I think it's the best time to be in IT, I'm not sure what Matt is talking about. (laughing) It's easier than ever, right? The direction is going to cloud. Kubernetes has won, Google has the best AI for now, right? So things are easier than ever before. You made commitments for five plus years on hardware, networking and so on premise, and I got gray hair about worrying it was the wrong decision. No, just kidding. But you kind of both sides, just to be controversial, make it interesting, right. So yeah, no, I think the interesting thing specifically with databases, right? We have this big suite versus best of breed, right? 
Obviously innovation, like you mentioned with Snowflake and others happening in the cloud, the cloud vendors serving up their own databases. And then we have one of the few survivors of the old guard, as Evans likes to call them, is Oracle, who's doing well with both their traditional database and now, which is really interesting and remarkable, because for Oracle it was always the power of one: have one database, add more to it, make it what I call the universal database. And now this new HeatWave offering is coming on the MySQL open source side. So they're getting the second (indistinct) right? So it's interesting that older players, traditional players who still are in the market are diversifying their offerings. Something we don't see so much from the other traditional players, on the Microsoft side or the IBM side, these days. >> Great, thank you Holgar. Bob Evans, you've covered this business for a while. You've worked at, you know, a number of different outlets and companies and you cover the competition, how do you see things? >> Dave, you know, the other angle to look at this from is from the customer side, right? You've got CEOs now, in any sort of business across all sorts of industries, and they understand that their future success is going to be dependent on their ability to become a digital company, to understand data, to use it the right way. So as you outlined, Dave, I think in your intro there, it is a fantastic time to be in the database business. And I think we've got a lot of new buyers and influencers coming in. They don't know all this history about IBM and Microsoft and Oracle and you know, whoever else. So I think they're going to take a long, hard look, Dave, at some of these results and who is able to help these companies not just serve up the best technology, but who's going to be able to help their business move into the digital future. So it's a fascinating time now from every perspective.
And also I think it's going to increase the prioritization for high availability. That is the player who can provide the highest availability is going to have, I think, a great deal of success in this emerging market. And also I anticipate that there will be more consolidation across platforms in order to enable cost savings for customers, and that's something that's always going to be important. And I think we'll see more of that over the horizon. And then finally security, security will be more important than ever. We've seen a spike (indistinct), we certainly have seen geopolitical originated cybersecurity concerns. And as a result, I see database security becoming all the more important. >> Great, thank you. Okay, let me share some data with you guys. I'm going to throw this at you and see what you think. We have this awesome data partner called Enterprise Technology Research, ETR. They do these quarterly surveys and each period with dozens of industry segments, they track clients spending, customer spending. And this is the database, data warehouse sector okay so it's taxonomy, so it's not perfect, but it's a big kind of chunk. They essentially ask customers within a category and buy a specific vendor, you're spending more or less on the platform? And then they subtract the lesses from the mores and they derive a metric called net score. It's like NPS, it's a measure of spending velocity. It's more complicated and granular than that, but that's the basis and that's the vertical axis. The horizontal axis is what they call market share, it's not like IDC market share, it's just pervasiveness in the data set. And so there are a couple of things that stand out here and that we can use as reference point. The first is the momentum of Snowflake. They've been off the charts for many, many, for over two years now, anything above that dotted red line, that 40%, is considered by ETR to be highly elevated and Snowflake's even way above that. And I think it's probably not sustainable. We're going to see in the next April survey, next month from those guys, when it comes out. And then you see AWS and Microsoft, they're really pervasive on the horizontal axis and highly elevated, Google falls behind them. And then you got a number of well funded players. You got Cockroach Labs, Mongo, Redis, MariaDB, which of course is a fork on MySQL started almost as protest at Oracle when they acquired Sun and they got MySQL and you can see the number of others. Now Oracle who's the leading database player, despite what Marc Staimer says, we know, (laughs) and they're a cloud player (laughing) who happens to be a leading database player. They dominate in the mission critical space, we know that they're the king of that sector, but you can see here that they're kind of legacy, right? They've been around a long time, they get a big install base. So they don't have the spending momentum on the vertical axis. Now remember this is, just really this doesn't capture spending levels, so that understates Oracle but nonetheless. So it's not a complete picture like SAP for instance is not in here, no Hana. I think people are actually buying it, but it doesn't show up here, (laughs) but it does give an indication of momentum and presence. So Bob Evans, I'm going to start with you. You've commented on many of these companies, you know, what does this data tell you? 
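For readers who want to reproduce the two plotting axes described above, the arithmetic reduces to a mores-minus-lesses percentage per vendor plus a citation share. A minimal sketch in Python, assuming a toy list of survey responses; the field names and data here are hypothetical placeholders, not ETR's actual survey schema or results:

```python
# Hypothetical sketch of an ETR-style "net score" and pervasiveness calculation.
# The survey rows below are invented for illustration; ETR's real methodology
# is more granular than this simple mores-minus-lesses example.
from collections import Counter

responses = [
    # (vendor, spending_direction) -- toy data only
    ("Snowflake", "more"), ("Snowflake", "more"), ("Snowflake", "flat"),
    ("Oracle", "more"), ("Oracle", "flat"), ("Oracle", "less"),
    ("AWS", "more"), ("AWS", "more"), ("AWS", "less"),
]

def net_score(vendor: str) -> float:
    """Percent of respondents spending more minus percent spending less (vertical axis)."""
    counts = Counter(direction for v, direction in responses if v == vendor)
    n = sum(counts.values())
    return 100.0 * (counts["more"] - counts["less"]) / n if n else 0.0

def pervasiveness(vendor: str) -> float:
    """Share of all survey citations naming this vendor (the horizontal axis)."""
    return 100.0 * sum(1 for v, _ in responses if v == vendor) / len(responses)

for v in ("Snowflake", "Oracle", "AWS"):
    print(f"{v}: net score {net_score(v):.1f}, pervasiveness {pervasiveness(v):.1f}")
```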
>> Yeah, you know, Dave, I think all these compilations of things like that are interesting, and that folks at ETR do some good work, but I think as you said, it's a snapshot sort of a two-dimensional thing of a rapidly changing, three dimensional world. You know, the incidents at which some of these companies are mentioned versus the volume that happens. I think it's, you know, with Oracle and I'm not going to declare my religious affiliation, either as cloud company or database company, you know, they're all of those things and more, and I think some of our old language of how we classify companies is just not relevant anymore. But I want to ask too something in here, the autonomous database from Oracle, nobody else has done that. So either Oracle is crazy, they've tried out a technology that nobody other than them is interested in, or they're onto something that nobody else can match. So to me, Dave, within Oracle, trying to identify how they're doing there, I would watch autonomous database growth too, because right, it's either going to be a big plan and it breaks through, or it's going to be caught behind. And the Snowflake phenomenon as you mentioned, that is a rare, rare bird who comes up and can grow 100% at a billion dollar revenue level like that. So now they've had a chance to come in, scare the crap out of everybody, rock the market with something totally new, the data cloud. Will the bigger companies be able to catch up and offer a compelling alternative, or is Snowflake going to continue to be this outlier. It's a fascinating time. >> Really, interesting points there. Holgar, I want to ask you, I mean, I've talked to certainly I'm sure you guys have too, the founders of Snowflake that came out of Oracle and they actually, they don't apologize. They say, "Hey, we not going to do all that complicated stuff that Oracle does, we were trying to keep it real simple." But at the same time, you know, they don't do sophisticated workload management. They don't do complex joints. They're kind of relying on the ecosystems. So when you look at the data like this and the various momentums, and we talked about the diverging strategies, what does this say to you? >> Well, it is a great point. And I think Snowflake is an example how the cloud can turbo charge a well understood concept in this case, the data warehouse, right? You move that and you find steroids and you see like for some players who've been big in data warehouse, like Sentara Data, as an example, here in San Diego, what could have been for them right in that part. The interesting thing, the problem though is the cloud hides a lot of complexity too, which you can scale really well as you attract lots of customers to go there. And you don't have to build things like what Bob said, right? One of the fascinating things, right, nobody's answering Oracle on the autonomous database. I don't think is that they cannot, they just have different priorities or the database is not such a priority. I would dare to say that it's for IBM and Microsoft right now at the moment. And the cloud vendors, you just hide that right through scripts and through scale because you support thousands of customers and you can deal with a little more complexity, right? It's not against them. Whereas if you have to run it yourself, very different story, right? You want to have the autonomous parts, you want to have the powerful tools to do things. >> Thank you. 
And so Matt, I want to go to you, you've set up front, you know, it's just complicated if you're in IT, it's a complicated situation and you've been on the customer side. And if you're a buyer, it's obviously, it's like Holgar said, "Cloud's supposed to make this stuff easier, but the simpler it gets the more complicated gets." So where do you place your bets? Or I guess more importantly, how do you decide where to place your bets? >> Yeah, it's a good question. And to what Bob and Holgar said, you know, the around autonomous database, I think, you know, part of, as I, you know, play kind of armchair psychologist, if you will, corporate psychologists, I look at what Oracle is doing and, you know, databases where they've made their mark and it's kind of, that's their strong position, right? So it makes sense if you're making an entry into this cloud and you really want to kind of build momentum, you go with what you're good at, right? So that's kind of the strength of Oracle. Let's put a lot of focus on that. They do a lot more than database, don't get me wrong, but you know, I'm going to short my strength and then kind of pivot from there. With regards to, you know, what IT looks at and what I would look at you know as an IT director or somebody who is, you know, trying to consume services from these different cloud providers. First and foremost, I go with what I know, right? Let's not forget IT is a conservative group. And when we look at, you know, all the different permutations of database types out there, SQL, NoSQL, all the different types of NoSQL, those are largely being deployed by business users that are looking for agility or businesses that are looking for agility. You know, the reason why MongoDB is so popular is because of DevOps, right? It's a great platform to develop on and that's where it kind of gained its traction. But as an IT person, I want to go with what I know, where my muscle memory is, and that's my first position. And so as I evaluate different cloud service providers and cloud databases, I look for, you know, what I know and what I've invested in and where my muscle memory is. Is there enough there and do I have enough belief that that company or that service is going to be able to take me to, you know, where I see my organization in five years from a data management perspective, from a business perspective, are they going to be there? And if they are, then I'm a little bit more willing to make that investment, but it is, you know, if I'm kind of going in this blind or if I'm cloud native, you know, that's where the Snowflakes of the world become very attractive to me. >> Thank you. So Marc, I asked Andy Jackson in theCube one time, you have all these, you know, data stores and different APIs and primitives and you know, very granular, what's the strategy there? And he said, "Hey, that allows us as the market changes, it allows us to be more flexible. If we start building abstractions layers, it's harder for us." I think also it was not a good time to market advantage, but let me ask you, I described earlier on that spectrum from AWS to Oracle. We just saw yesterday, Oracle announced, I think the third major enhancement in like 15 months to MySQL HeatWave, what do you make of that announcement? How do you think it impacts the competitive landscape, particularly as it relates to, you know, converging transaction and analytics, eliminating ELT, I know you have some thoughts on this. 
>> So let me back up for a second and defend my cloud statement about Oracle for a moment. (laughing) AWS did a great job in developing the cloud market in general and everything in the cloud market. I mean, I give them lots of kudos on that. And a lot of what they did is they took open source software and they rent it to people who use their cloud. So I give 'em lots of credit, they dominate the market. Oracle was late to the cloud market. In fact, they actually poo-pooed it initially, if you look at some of Larry Ellison's statements, they said, "Oh, it's never going to take off." And then they did 180 turn, and they said, "Oh, we're going to embrace the cloud." And they really have, but when you're late to a market, you've got to be compelling. And this ties into the announcement yesterday, but let's deal with this compelling. To be compelling from a user point of view, you got to be twice as fast, offer twice as much functionality, at half the cost. That's generally what compelling is that you're going to capture market share from the leaders who established the market. It's very difficult to capture market share in a new market for yourself. And you're right. I mean, Bob was correct on this and Holgar and Matt in which you look at Oracle, and they did a great job of leveraging their database to move into this market, give 'em lots of kudos for that too. But yesterday they announced, as you said, the third innovation release and the pace is just amazing of what they're doing on these releases on HeatWave that ties together initially MySQL with an integrated builtin analytics engine, so a data warehouse built in. And then they added automation with autopilot, and now they've added machine learning to it, and it's all in the same service. It's not something you can buy and put on your premise unless you buy their cloud customers stuff. But generally it's a cloud offering, so it's compellingly better as far as the integration. You don't buy multiple services, you buy one and it's lower cost than any of the other services, but more importantly, it's faster, which again, give 'em credit for, they have more integration of a product. They can tie things together in a way that nobody else does. There's no additional services, ETL services like Glue and AWS. So from that perspective, they're getting better performance, fewer services, lower cost. Hmm, they're aiming at the compelling side again. So from a customer point of view it's compelling. Matt, you wanted to say something there. >> Yeah, I want to kind of, on what you just said there Marc, and this is something I've found really interesting, you know. The traditional way that you look at software and, you know, purchasing software and IT is, you look at either best of breed solutions and you have to work on the backend to integrate them all and make them all work well. And generally, you know, the big hit against the, you know, we have one integrated offering is that, you lose capability or you lose depth of features, right. And to what you were saying, you know, that's the thing I found interesting about what Oracle is doing is they're building in depth as they kind of, you know, build that service. It's not like you're losing a lot of capabilities, because you're going to one integrated service versus having to use A versus B versus C, and I love that idea. >> You're right. Yeah, not only you're not losing, but you're gaining functionality that you can't get by integrating a lot of these. 
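The "one service, no ETL" point can be pictured with a short sketch: transactional writes and analytical aggregates go against the same MySQL-compatible endpoint rather than being exported into a separate warehouse first. This is only an illustration of the pattern, assuming a reachable MySQL-compatible instance; the host, credentials, schema and table names are hypothetical, and it deliberately avoids any HeatWave-specific routines.

```python
# Illustrative sketch: OLTP and analytics against one MySQL-compatible endpoint,
# so no export/ETL step into a separate analytical store is needed.
# Connection details and table names below are hypothetical placeholders.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="mydb.example.com", user="app_user", password="secret", database="shop"
)
cur = conn.cursor()

# Transactional write (the OLTP side).
cur.execute(
    "INSERT INTO orders (customer_id, amount, created_at) VALUES (%s, %s, NOW())",
    (42, 19.99),
)
conn.commit()

# Analytical aggregate over the same live table -- the query a warehouse would
# normally answer only after an ETL job had copied the data across.
cur.execute(
    "SELECT DATE(created_at) AS day, SUM(amount) AS revenue "
    "FROM orders GROUP BY day ORDER BY day DESC LIMIT 30"
)
for day, revenue in cur.fetchall():
    print(day, revenue)

cur.close()
conn.close()
```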
I mean, I can take Snowflake and integrate it in with machine learning, but I also have to integrate it in with a transactional database. So I've got to have connectors between all of this, which means I'm adding time. And what it comes down to at the end of the day is expertise, effort, time, and cost. And so what I see as the difference from the Oracle announcements is they're aiming at reducing all of that by increasing performance as well. Correct me if I'm wrong on that, but that's what I saw at the announcement yesterday. >> You know, Marc, one thing though Marc, it's funny you say that because I started out saying, you know, I'm glad I'm not in IT anymore. And the reason is because of exactly what you said, it's almost like there's a pseudo level of witchcraft that's required to support the modern data environment right in the enterprise. And I need simpler, faster, better. That's what I need, you know, I am no longer wearing pocket protectors. I have turned from, you know, a break/fix kind of person, to you know, business consultant. And I need that point and click simplicity, but I can't sacrifice, you know, depth of features or functionality on the backend as I play that consultancy role. >> So I want to bring in Ron, you know, it's funny. So Matt, you mentioned Mongo, I often say, if Oracle mentions you, you're on the map. We saw them yesterday Ron, (laughing) they hammered Redshift's auto ML, they took swipes at Snowflake, a little bit of BigQuery. What were your thoughts on that? Do you agree with what these guys are saying in terms of HeatWave's capabilities? >> Yes, Dave, I think that's an excellent question. And fundamentally I do agree. And the question is why, and I think it's important to know that all of the Oracle data is backed by the fact that they're using benchmarks. For example, all of the ML and all of the TPC benchmarks, including all the scripts, all the configs and all the detail are posted on GitHub. So anybody can look at these results, they're fully transparent, and replicate them themselves. If you don't agree with this data, then by all means challenge it. And we have not really seen that in all of the new updates in HeatWave over the last 15 months. And as a result, when it comes to these, you know, fundamentals in looking at the competitive landscape, which I think gives validity to outcomes such as Oracle being able to deliver 4.8 times better price performance than Redshift. As well as, for example, 14.4 times better price performance than Snowflake, and also 12.9 times better price performance than BigQuery. And so that is, you know, looking at the quantitative side of things. But again, I think, you know, to Marc's point and to Matt's point, there are also qualitative aspects that clearly differentiate the Oracle proposition, from my perspective. For example, now the MySQL HeatWave ML capabilities are native, they're built in, and they also support things such as completion criteria. And as a result, that enables them to show that hey, when you're using Redshift ML for example, you're having to also use their SageMaker tool and it's running on a meter. And so, you know, nobody really wants to be running on a meter when, you know, executing these incredibly complex tasks. And likewise, when it comes to Snowflake, they have to use a third party capability. They don't have it built in, it's not native. So the user, to the point made earlier, is having to spend more time, and it increases complexity to use auto ML capabilities across the Snowflake platform.
And also, I think it also applies to other important features such as data sampling, for example, with the HeatWave ML, it's intelligent sampling that's being implemented. Whereas in contrast, we're seeing Redshift using random sampling. And again, Snowflake, you're having to use a third party library in order to achieve the same capabilities. So I think the differentiation is crystal clear. I think it definitely is refreshing. It's showing that this is where true value can be assigned. And if you don't agree with it, by all means challenge the data. >> Yeah, I want to come to the benchmarks in a minute. By the way, you know, the gentleman who's Oracle's architect, he did a great job on the call yesterday explaining what you have to do. I thought that was quite impressive. But Bob, I know you follow the financials pretty closely and on the earnings call earlier this month, Ellison said that, "We're going to see HeatWave on AWS." And the skeptic in me said, oh, they must not be getting people to come to OCI. And then, you remember this chart they showed yesterday that showed the growth of HeatWave on OCI. But of course there was no data on there, it was just sort of, you know, lines up and to the right. So what do you guys think of that? (Marc laughs) Does it signal, Bob, desperation by Oracle that they can't get traction on OCI, or is it just really a smart TAM expansion move? What do you think? >> Yeah, Dave, that's a great question. You know, along the way there, and you know, just inside of that, was something that Ellison said on the earnings call that spoke to a different sort of philosophy or mindset, almost Marc, where he said, "We're going to make this multicloud," right? With a lot of their other cloud stuff, if you wanted to use any of Oracle's cloud software, you had to use Oracle's infrastructure, OCI, there was no other way out of it. But this one, but I thought it was a classic Ellison line. He said, "Well, we're making this available on AWS. We're making this available, you know, on Snowflake because we're going after those users. And once they see what can be done here." So he's looking at it, I guess you could say, it's a concession to customers because they want multi-cloud. The other way to look at it, it's a hunting expedition and it's one of those, I think, uniquely Oracle ways. He said up front, right, he doesn't say, "Well, there's a big market, there's a lot for everybody, we just want our slice." Said, "No, we are going after Amazon, we're going after Redshift, we're going after Aurora. We're going after these users of Snowflake and so on." And I think it's really fairly refreshing these days to hear somebody say that, because now if I'm a buyer, I can look at that and say, you know, to Marc's point, "Do they measure up, do they crack that threshold ceiling? Or is this just going to be more pain than a few dollars savings is worth?" But you look at those numbers that Ron pointed out and that we all saw in that chart. I've never seen, Dave, anything like that. In a substantive market, a new player coming in here, and being able to establish differences that are four, seven, eight, 10, 12 times better than the competition. And as new buyers look at that, they're going to say, "What the hell are we doing paying, you know, five times more to get a poor result? What's going on here?" So I think this is going to rattle people and force a harder, closer look at what these alternatives are. >> Thank you.
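Price performance, as the term is used in these benchmark claims, is typically a cost-to-complete-the-workload comparison: run time multiplied by the hourly price of the configuration, measured against a baseline. A back-of-the-envelope sketch, with entirely made-up numbers purely to show the arithmetic; these are not the audited figures cited above.

```python
# Toy price-performance calculation: cost to run a fixed benchmark workload.
# All runtimes and hourly rates below are invented for illustration only.
systems = {
    # name: (benchmark runtime in hours, cluster price in $ per hour)
    "System A": (2.0, 16.00),
    "System B": (5.0, 24.00),
    "System C": (4.0, 30.00),
}

def workload_cost(runtime_hours: float, dollars_per_hour: float) -> float:
    return runtime_hours * dollars_per_hour

baseline = "System A"
base_cost = workload_cost(*systems[baseline])

for name, (runtime, rate) in systems.items():
    cost = workload_cost(runtime, rate)
    # "X times better price performance" is simply how many times cheaper the
    # baseline is for the same completed workload.
    print(f"{name}: ${cost:.2f} per run, {cost / base_cost:.1f}x the cost of {baseline}")
```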
Let's just skip ahead to the benchmarks, guys, bring up the next slide, let's skip ahead a little bit here, which talks to the benchmarks and the benchmarking if we can. You know, David Floyer, the sort of semiretired, you know, Wikibon analyst said, "Dave, this is going to force Amazon and others, Snowflake," he said, "To rethink actually how they architect databases." And this is kind of a compilation of some of the data that they shared. They went after Redshift mostly, (laughs) but also, you know, as I say, Snowflake, BigQuery. And, like I said, you can always tell which companies are doing well, 'cause Oracle will come after you, but they're on the radar here. (laughing) Holgar, should we take this stuff seriously? I mean, or is it, you know, a grain of salt? What are your thoughts here? >> I think you have to take it seriously. I mean, that's a great question, great point on that. Because like Ron said, "If there's a flaw in a benchmark, we know this database traditionally, right?" If anybody came up with that, everybody will be, "Oh, you put the wrong benchmark, it wasn't audited right, let us do it again," and so on. We don't see this happening, right? So kudos to Oracle to be aggressive, differentiated, and seem to have impeccable benchmarks. But what we really see, I think in my view, is the classic, and we can talk about this in 100 years, right, the suite versus best of breed, right? And the key question of the suite, because the suite's always slower, right? No matter at which level of the stack, you have the suite, then the best of breed that will come up with something new, use a cloud, put the data warehouse on steroids and so on. The important thing is that you have to assess as a buyer what is the speed of my suite vendor. And that's what you guys mentioned before as well, right? Marc said that and so on, "Like, this is a third release in one year of the HeatWave team, right?" So everybody in the database open source world, Marc, and there are so many MySQL spinoffs, to a certain point has to put a shine on the speed of the (indistinct) team, putting out fundamental changes. And the beauty of that, right, is so inherent to the Oracle value proposition. Larry's vision of building the IBM of the 21st century, right from the silicon, from the chip, all the way across the seven stacks to the click of the user. And that's what makes the database, what Rob was saying, "tied to the OCI infrastructure," because it's designed for that, it runs uniquely better for that, that's why we see the cross connect to Microsoft. HeatWave, though, is different, right? Because HeatWave runs on cheap hardware, right? Which is the bread and butter x86 scale of any cloud provider, right? So Oracle probably needs it to scale OCI in a different category, not the expensive side, but it also allows what we said before, the multicloud capability, which ultimately CIOs really want, because data gravity is real, you want to operate where that is. If you have a fast, innovative offering, which gives you more functionality, and the R&D speed is really impressive for the space, puts out results, then it's a good bet to look at. >> Yeah, so you're saying that suite versus best of breed. I just want to sort of play back then, Marc, a comment. That suite versus best of breed, there's always been that trade off. If I understand you, Holgar, you're saying that somehow Oracle has magically cut through that trade off and they're giving you the best of both.
The provision of important features, which matter to buyers of the suite vendor, eclipses the best of breed vendor, then the best of breed vendor is in the hell of a potential job. >> Yeah, go ahead Marc. >> Yeah and I want to add on what Holgar just said there. I mean the worst job in the data center is data movement, moving the data sucks. I don't care who you are, nobody likes it. You never get any kudos for doing it well, and you always get the ah craps, when things go wrong. So it's in- >> In the data center Marc all the time across data centers, across cloud. That's where the bleeding comes. >> It's right, you get beat up all the time. So nobody likes to move data, ever. So what you're looking at with what they announce with HeatWave and what I love about HeatWave is it doesn't matter when you started with it, you get all the additional features they announce it's part of the service, all the time. But they don't have to move any of the data. You want to analyze the data that's in your transactional, MySQL database, it's there. You want to do machine learning models, it's there, there's no data movement. The data movement is the key thing, and they just eliminate that, in so many ways. And the other thing I wanted to talk about is on the benchmarks. As great as those benchmarks are, they're really conservative 'cause they're underestimating the cost of that data movement. The ETLs, the other services, everything's left out. It's just comparing HeatWave, MySQL cloud service with HeatWave versus Redshift, not Redshift and Aurora and Glue, Redshift and Redshift ML and SageMaker, it's just Redshift. >> Yeah, so what you're saying is what Oracle's doing is saying, "Okay, we're going to run MySQL HeatWave benchmarks on analytics against Redshift, and then we're going to run 'em in transaction against Aurora." >> Right. >> But if you really had to look at what you would have to do with the ETL, you'd have to buy two different data stores and all the infrastructure around that, and that goes away so. >> Due to the nature of the competition, they're running narrow best of breed benchmarks. There is no suite level benchmark (Dave laughs) because they created something new. >> Well that's you're the earlier point they're beating best of breed with a suite. So that's, I guess to Floyer's earlier point, "That's going to shake things up." But I want to come back to Bob Evans, 'cause I want to tap your Cloud Wars mojo before we wrap. And line up the horses, you got AWS, you got Microsoft, Google and Oracle. Now they all own their own cloud. Snowflake, Mongo, Couchbase, Redis, Cockroach by the way they're all doing very well. They run in the cloud as do many others. I think you guys all saw the Andreessen, you know, commentary from Sarah Wang and company, to talk about the cost of goods sold impact of cloud. So owning your own cloud has to be an advantage because other guys like Snowflake have to pay cloud vendors and negotiate down versus having the whole enchilada, Safra Catz's dream. Bob, how do you think this is going to impact the market long term? >> Well, Dave, that's a great question about, you know, how this is all going to play out. If I could mention three things, one, Frank Slootman has done a fantastic job with Snowflake. Really good company before he got there, but since he's been there, the growth mindset, the discipline, the rigor and the phenomenon of what Snowflake has done has forced all these bigger companies to really accelerate what they're doing. 
And again, it's an example of how this intense competition makes all the different cloud vendors better and it provides enormous value to customers. Second thing I wanted to mention here was look at the Adam Selipsky effect at AWS; he took over in the middle of May, and in Q2, Q3, Q4, AWS's growth rate accelerated. And in each of those three quarters, they grew faster than Microsoft's cloud, which has not happened in two or three years, so they're closing the gap on Microsoft. The third thing, Dave, in this, you know, incredibly intense competitive nature here, look at Larry Ellison, right? He's got his, you know, the product that for the last two or three years, he said, "It's going to help determine the future of the company, autonomous database." You would think he's the last person in the world who's going to bring in, you know, in some ways another database to think about there, but he has put, you know, his whole effort and energy behind this. The investments Oracle's made, he's riding this horse really hard. So it's not just a technology achievement, but it's also an investment priority for Oracle going forward. And I think it's going to form a lot of how they position themselves to this new breed of buyer with a new type of need and expectations from IT. So I just think the next two or three years are going to be fantastic for people who are lucky enough to get to do the sorts of things that we do. >> You know, it's a great point you made about AWS. Back in 2018 Q3, they were doing about 7.4 billion a quarter and they were growing in the mid-forties. They dropped down to like 29% Q4, 2020, I'm looking at the data now. They popped back up last quarter, last reported quarter, to 40%, that is 17.8 billion, so they more than doubled and they accelerated their growth rate. (laughs) So maybe that portends, people are concerned about Snowflake right now decelerating growth. You know, maybe that's going to be different. By the way, I think Snowflake has a different strategy, the whole data cloud thing, data sharing. They're not trying to necessarily take Oracle head on, which is going to make the next 10 years really interesting. All right, we got to go, last question. 30 seconds or less, what can we expect from the future of data platforms? Matt, please start. >> I have to go first again? You're killing me, Dave. (laughing) In the next few years, I think you're going to see the major players continue to meet customers where they are, right. Every organization, every environment is, you know, kind of, we use these words bespoke and, pardon the pun, Snowflake, but snowflakes, right. But you know, they're all opinionated and unique, and what's great as an IT person is, you know, there is a service for me regardless of where I am on my journey, in my data management journey. I think you're going to continue to see, with regards specifically to Oracle, I think you're going to see the company continue along this path of being all things to all people, if you will, or all organizations, without sacrificing, you know, kind of richness of features or sacrificing who they are, right. Look, they are the data kings, right? I mean, they've been a database leader for an awful long time. I don't see that going away any time soon, and I love the innovative spirit they've brought in with HeatWave.
So the database will have to do more things than just store the data and support the DBA. It will have to show it can provide insights, the whole upside, it will be able to run machine learning. We haven't really talked about that. It's just exciting what kind of use cases we can get out of machine learning running in real time on data as it changes, right? So, which is part of the E5 announcement, right? So we'll see more of that self-driving nature in the database space. And because you said we can promote it, right. Check out my report about HeatWave's latest release, which I posted on oracle.com. >> Great, thank you for that. And Bob Evans, please. You're great at quick hits, hit us. >> Dave, thanks. I really enjoyed getting to hear everybody's opinion here today, and I think what's going to happen too. I think there's a new generation of buyers, a new set of CXO influencers in here. And I think what Oracle's done with this, MySQL HeatWave, those benchmarks that Ron talked about so eloquently here, that is going to become something that forces other companies to not just try to get incrementally better. I think we're going to see a massive new wave of innovation to try to play catch up. So I really take my hat off to Oracle's achievement; it's going to push everybody to be better. >> Excellent. Marc Staimer, what do you say? >> Sure, I'm going to leverage off of something Matt said earlier, "Those companies that are going to develop faster, cheaper, simpler products that are going to solve customer problems, IT problems, are the ones that are going to succeed, or the ones who are going to grow. The ones who are just focused on the technology are going to fall by the wayside." So those who can solve more problems, do it more elegantly and do it for less money are going to do great. So Oracle's going down that path today, Snowflake's going down that path. They're trying to do more integration with third parties, but as a result, aiming at that simpler, faster, cheaper mentality is where you're going to continue to see this market go. >> Amen brother Marc. >> Thank you. Ron Westfall, we'll give you the last word, bring us home. >> Well, thank you. And I'm loving it. I see a wave of innovation across the entire cloud database ecosystem and Oracle is fueling it. We are seeing it with the native integration of auto ML capabilities, elastic scaling, lower entry price points, et cetera. And this is just going to be great news for buyers, but also developers, and increased use of open APIs. And so I think that is really the key takeaway. We're going to see a lot of great innovation on the horizon here. >> Guys, fantastic insights, one of the best power panels I've ever done. Love to have you back. Thanks so much for coming on today. >> Great job, Dave, thank you. >> All right, and thank you for watching. This is Dave Vellante for theCube and we'll see you next time. (soft music)
Analyst Predictions 2022: The Future of Data Management
[Music] >> In the 2010s, organizations became keenly aware that data would become the key ingredient in driving competitive advantage, differentiation and growth. But to this day, putting data to work remains a difficult challenge for many, if not most, organizations. Now, as the cloud matures, it has become a game changer for data practitioners by making cheap storage and massive processing power readily accessible. We've also seen better tooling in the form of data workflows, streaming, machine intelligence, AI, developer tools, security, observability, automation, new databases and the like. These innovations, they accelerate data proficiency, but at the same time they add complexity for practitioners. Data lakes, data hubs, data warehouses, data marts, data fabrics, data meshes, data catalogs, data oceans are forming, they're evolving and exploding onto the scene. So in an effort to bring perspective to the sea of optionality, we've brought together the brightest minds in the data analyst community to discuss how data management is morphing and what practitioners should expect in 2022 and beyond. Hello everyone, my name is Dave Vellante with theCUBE, and I'd like to welcome you to a special CUBE presentation, Analyst Predictions 2022: The Future of Data Management. We've gathered six of the best analysts in data and data management, who are going to present and discuss their top predictions and trends for 2022 and the first half of this decade. Let me introduce our six power panelists. Sanjeev Mohan is a former Gartner analyst and principal at SanjMo. Tony Baer is principal at dbInsight. Carl Olofson is well-known research vice president with IDC. Dave Menninger is senior vice president and research director at Ventana Research. Brad Shimmin, chief analyst of AI platforms, analytics and data management at Omdia. And Doug Henschen, vice president and principal analyst at Constellation Research. Gentlemen, welcome to the program and thanks for coming on theCUBE today. >> Great to be here. >> Thank you. >> All right, here's the format we're going to use. I, as moderator, am going to call on each analyst separately, who then will deliver their prediction or mega trend, and then in the interest of time management and pace, two analysts will have the opportunity to comment. If we have more time, we'll elongate it, but let's get started right away. Sanjeev Mohan, please kick it off. You want to talk about governance, go ahead, sir.

>> Thank you, Dave. I believe that data governance, which we've been talking about for many years, is now not only going to be mainstream, it's going to be table stakes. And all the things that you mentioned, you know, with data oceans, data lakes, lakehouses, data fabrics, meshes, the common glue is metadata. If we don't understand what data we have and we are governing it, there is no way we can manage it. So we saw Informatica go public last year after a hiatus of six years. I'm predicting that this year we see some more companies go public. My bet is on Collibra most likely, and maybe Alation we'll see go public this year. I'm also predicting that the scope of data governance is going to expand beyond just data. It's not just data and reports. We are going to see more transformations like Spark jobs, Python, even Airflow. We're going to see more of streaming data, so from Kafka schema registry, for example. We will see AI models become part of this whole governance suite. So the governance suite is going to be very comprehensive, very detailed lineage, impact analysis, and then even expand into data quality. We've already seen that happen with some of the tools, where they are buying these smaller companies and bringing in data quality monitoring and integrating it with metadata management, data catalogs, also data access governance. So what we are going to see is that once the data governance platforms become the key entry point into these modern architectures, I'm predicting that the usage, the number of users, of a data catalog is going to exceed that of a BI tool. That will take time, and we've already seen that trajectory. Right now, if you look at BI tools, I would say there are 100 users to a BI tool to one data catalog, and I see that evening out over a period of time. And at some point, data catalogs will really become, you know, the main way for us to access data. The data catalog will help us visualize data, but if we want to do more in-depth analysis, it'll be the jumping-off point into the BI tool, the data science tool, and that is the journey I see for the data governance products. >> Excellent, thank you. Some comments, maybe Doug? A lot of things to weigh in on there, maybe you could comment. >> Yeah, Sanjeev, I think you're spot on, a lot of the trends. The one disagreement, I think it's really still far from mainstream. As you say, we've been talking about this for years. It's like God, motherhood, apple pie. Everyone agrees it's important, but too few organizations are really practicing good governance, because it's hard and because the incentives have been lacking. I think one thing that deserves mention in this context is ESG mandates and guidelines. These are environmental, social and governance regs and guidelines. We've seen the environmental regs and guidelines imposed in industries, particularly the carbon intensive industries. We've seen the social mandates, particularly diversity, imposed on suppliers by companies that are leading on this topic. We've seen governance guidelines now being imposed by banks and investors. So these ESGs are presenting new carrots and sticks, and it's going to demand more solid data, it's going to demand more detailed reporting and solid reporting, tighter governance. But we're still far from mainstream adoption. We have a lot of, you know, best of breed, niche players in the space. I think the signs that it's going to be more mainstream are starting with things like Azure Purview, Google Dataplex. The big cloud platform players seem to be upping the ante and starting to address governance. >> Excellent, thank you, Doug. Brad, I wonder if you could chime in as well. >> Yeah, I would love to be a believer in data catalogs, but to Doug's point, I think that it's going to take some more pressure for that to happen. I recall metadata being something every enterprise thought they were going to get under control when we were working on service-oriented architecture back in the 90s, and that didn't happen quite the way we anticipated. And to Sanjeev's point, it's because it is really complex and really difficult to do. My hope is that, you know, we won't sort of, how do we put this, fade out into this nebulous nebula of domain catalogs that are specific to individual use cases, like Purview for getting data quality right, or like data governance and cybersecurity, and instead we have some tooling that can actually be adaptive to gather metadata, to create something I know is important to you, Sanjeev, and that is this idea of observability. If you can get enough metadata, without moving your data around, but understanding the entirety of a system that's running on this data, you can do a lot to help with the governance that Doug is talking about.

>> So I just want to add that, you know, data governance, like many other initiatives, did not succeed; even AI went into an AI winter, but that's a different topic. But a lot of these things did not succeed because, to your point, the incentives were not there. I remember when Sarbanes-Oxley had come onto the scene, if a bank did not do Sarbanes-Oxley, they were very happy to pay a million-dollar fine. That was, like, you know, pocket change for them instead of doing the right thing. But I think the stakes are much higher now. With GDPR, the floodgates opened. Now, you know, California, you know, has CCPA, but even CCPA is being outdated with CPRA, which is much more GDPR-like. So we are very rapidly entering a space where pretty much every major country in the world is coming up with its own compliance regulatory requirements. Data residency is becoming really important, and I think we are going to reach a stage where it won't be optional anymore, whether we like it or not. And I think the reason data catalogs were not successful in the past is because we did not have the right focus on adoption. We were focused on features, and these features were disconnected, very hard for the business to adopt. These were built by IT people for IT departments to take a look at technical metadata, not business metadata. Today the tables have turned. CDOs are driving this initiative, regulatory compliances are beating down hard, so I think the time might be right. >> Yeah, so guys, we have to move on here, but there's some real meat on the bone here, Sanjeev. I like the fact that you called out Collibra and Alation, so we can look back a year from now and say, okay, he made the call, he stuck it. And then the ratio of BI tools to data catalogs, that's another sort of measurement that we can take, even though some skepticism there; that's something that we can watch. And I wonder if someday we'll have more metadata than data. But I want to move to Tony Baer. You want to talk about data mesh, and speaking, you know, coming off of governance, I mean, wow, you know, the whole concept of data mesh is decentralized data, and then governance becomes, you know, a nightmare there. But take it away, Tony.

>> We'll put it this way. Data mesh, you know, the idea, at least as proposed by Thoughtworks, you know, basically was unleashed a couple years ago, and the press has been almost uniformly, almost uncritical. A good reason for that is, for all the problems that basically Sanjeev and Doug and Brad were just speaking about, which is that we have all this data out there and we don't know what to do about it. Now, that's not a new problem; that was a problem when we had enterprise data warehouses, it was a problem when we had our Hadoop data clusters. It's even more of a problem now that the data's out in the cloud, where the data is not only your data, like, is not only S3, it's all over the place, and it's also including streaming, which I know we'll be talking about later. So the data mesh was a response to that, the idea that, you know, who are the folks that really know best about governance? It's the domain experts. So basically, data mesh was an architectural pattern and a process. My prediction for this year is that data mesh is going to hit cold hard reality, because if you do a Google search, basically the published work, the articles and databases, have been largely, you know, pretty uncritical so far, basically lauding it as being a very revolutionary new idea. I don't think it's that revolutionary, because we've talked about ideas like this. Brad and I, you and I met years ago when we were talking about SOA, and decentralizing, all of this was at the application level. Now we're talking about at the data level, and now we have microservices. So there's this thought of, oh, if we manage apps in cloud native through microservices, why don't we think of data in the same way? My sense this year, and this has been a very active search if you look at Google search trends, is that now companies, enterprises, are going to look at this seriously. And as they look at it seriously, it's going to attract its first real hard scrutiny, it's going to attract its first backlash. That's not necessarily a bad thing; it means that it's being taken seriously. The reason why I think you'll start to see basically the cold hard light of day shine on data mesh is that it's still a work in progress. You know, this idea is basically a couple years old, and there's still some pretty major gaps. The biggest gap is in the area of federated governance. Now, federated governance itself is not a new issue. With federated governance, we're trying to figure out, like, how can we basically strike the balance between, let's say, you know, basically consistent enterprise policy, consistent enterprise governance, but yet the groups that understand the data know how to basically, you know, how do we basically sort of balance the two? There's a huge gap there in practice and knowledge. Also, to a lesser extent, there's a technology gap, which is basically in the self-service technologies that will help teams essentially govern data, you know, basically through the full life cycle: from selecting the data, from, you know, building the pipelines, from determining your access control, looking at quality, looking at basically whether data is fresh or whether or not it's trending off course. So my prediction is that it will really receive the first harsh scrutiny this year. You are going to see some organizations, enterprises, declare premature victory when they build some federated query implementations. You're going to see vendors start to data mesh wash their products; anybody in the data management space, they're going to say that, whether it's basically a pipelining tool, whether it's basically ELT, whether it's a catalog or a federated query tool, they're all going to be, like, you know, basically promoting the fact of how they support this. Hopefully nobody is going to call themselves a data mesh tool, because data mesh is not a technology. We're going to see one other thing come out of this, and this harks back to the metadata that Sanjeev was talking about and the catalogs that he was talking about, which is that there's going to be a new focus, a renewed focus, on metadata. And I think that's going to spur interest in data fabrics. Now, data fabrics are pretty vaguely defined, but if we just take the most elemental definition, which is a common metadata backplane, I think that if anybody is going to get serious about data mesh, they need to look at a data fabric, because we all, at the end of the day, need to read from the same sheet of music. >> So thank you, Tony. Dave, Dave Menninger, I mean, one of the things that people like about data mesh is it pretty crisply articulates some of the flaws in today's organizational approaches to data. What are your thoughts on this?

>> Well, I think we have to start by defining data mesh, right? The term is already getting corrupted, right? Tony said it's going to see the cold hard light of day, and there's a problem right now that there are a number of overlapping terms that are similar but not identical. So we've got data virtualization, data fabric, excuse me for a second, sorry about that, data virtualization, data fabric, data federation, right? So I think that it's not really clear what each vendor means by these terms. I see data mesh and data fabric becoming quite popular. I've interpreted data mesh as referring primarily to the governance aspects, as originally, you know, intended and specified, but that's not the way I see vendors using it. I see vendors using it much more to mean data fabric and data virtualization. So I'm going to comment on the group of those things. I think the group of those things is going to happen. They're going to happen, they're going to become more robust. Our research suggests that a quarter of organizations are already using virtualized access to their data lakes, and another half, so a total of three quarters, will eventually be accessing their data lakes using some sort of virtualized access. Again, whether you define it as mesh or fabric or virtualization isn't really the point here, but this notion that there are different elements of data, metadata and governance within an organization that all need to be managed collectively. The interesting thing is when you look at the satisfaction rates of those organizations using virtualization versus those that are not, it's almost double. 68%, I'm sorry, 79% of organizations that were using virtualized access expressed satisfaction with their access to the data lake; only 39% expressed satisfaction if they weren't using virtualized access. >> So thank you, Dave. Sanjeev, we've just got about a couple minutes on this topic, but I know you're speaking, or maybe you've spoken already, on a panel with Zhamak Dehghani, who sort of invented the concept. Governance obviously is a big sticking point, but what are your thoughts on this? You're on mute. >> So my message to Zhamak and to the community is, as opposed to what Dave said, let's not define it. We spent the whole year defining it. There are four principles: domain, product, data infrastructure, and governance. Let's take it to the next level. I get a lot of questions on what is the difference between data fabric and data mesh, and I'm like, how can I compare the two? Because data mesh is a business concept, data fabric is a data integration pattern. How do you define, how do you compare the two? You have to bring data mesh a level down. So to Tony's point, I'm on a warpath in 2022 to take it down to, what does a data product look like? How do we handle shared data across domains and govern it? And I think what we are going to see more of in 2022 is operationalization of data mesh. >> I think we could have a whole hour on this topic, couldn't we? Maybe we should do that. But let's go to, let's move to Carl. So Carl, you're a database guy, you've been around that block for a while now. You want to talk about graph databases, bring it on. >> Oh yeah, okay, thanks. So I regard graph database as basically the next truly revolutionary database management technology. I'm looking forward to, for the graph database market, which of course we haven't defined yet, so obviously I have a little wiggle room in what I'm about to say, but that this market will grow
by about 600 percent over the next 10 years now 10 years is a long time but over the next five years we expect to see gradual growth as people start to learn how to use it problem isn't that it's used the problem is not that it's not useful is that people don't know how to use it so let me explain before i go any further what a graph database is because some of the folks on the call may not may not know what it is a graph database organizes data according to a mathematical structure called a graph a graph has elements called nodes and edges so a data element drops into a node the nodes are connected by edges the edges connect one node to another node combinations of edges create structures that you can analyze to determine how things are related in some cases the nodes and edges can have properties attached to them which add additional informative material that makes it richer that's called a property graph okay there are two principal use cases for graph databases there's there's semantic proper graphs which are used to break down human language text uh into the semantic structures then you can search it organize it and and and answer complicated questions a lot of ai is aimed at semantic graphs another kind is the property graph that i just mentioned which has a dazzling number of use cases i want to just point out is as i talk about this people are probably wondering well we have relational databases isn't that good enough okay so a relational database defines it uses um it supports what i call definitional relationships that means you define the relationships in a fixed structure the database drops into that structure there's a value foreign key value that relates one table to another and that value is fixed you don't change it if you change it the database becomes unstable it's not clear what you're looking at in a graph database the system is designed to handle change so that it can reflect the true state of the things that it's being used to track so um let me just give you some examples of use cases for this um they include uh entity resolution data lineage uh um social media analysis customer 360 fraud prevention there's cyber security there's strong supply chain is a big one actually there's explainable ai and this is going to become important too because a lot of people are adopting ai but they want a system after the fact to say how did the ai system come to that conclusion how did it make that recommendation right now we don't have really good ways of tracking that okay machine machine learning in general um social network i already mentioned that and then we've got oh gosh we've got data governance data compliance risk management we've got recommendation we've got personalization anti-money money laundering that's another big one identity and access management network and i.t operations is already becoming a key one where you actually have mapped out your operation your your you know whatever it is your data center and you you can track what's going on as things happen there root cause analysis fraud detection is a huge one a number of major credit card companies use graph databases for fraud detection risk analysis tracking and tracing churn analysis next best action what-if analysis impact analysis entity resolution and i would add one other thing or just a few other things to this list metadata management so sanjay here you go this is your engine okay because i was in metadata management for quite a while in my past life and one of the things i found was that none of the 
data management technologies that were available to us could efficiently handle metadata because of the kinds of structures that result from it but grass can okay grafts can do things like say this term in this context means this but in that context it means that okay things like that and in fact uh logistics management supply chain it also because it handles recursive relationships by recursive relationships i mean objects that own other objects that are of the same type you can do things like bill materials you know so like parts explosion you can do an hr analysis who reports to whom how many levels up the chain and that kind of thing you can do that with relational databases but yes it takes a lot of programming in fact you can do almost any of these things with relational databases but the problem is you have to program it it's not it's not supported in the database and whenever you have to program something that means you can't trace it you can't define it you can't publish it in terms of its functionality and it's really really hard to maintain over time so carl thank you i wonder if we could bring brad in i mean brad i'm sitting there wondering okay is this incremental to the market is it disruptive and replaceable what are your thoughts on this space it's already disrupted the market i mean like carl said go to any bank and ask them are you using graph databases to do to get fraud detection under control and they'll say absolutely that's the only way to solve this problem and it is frankly um and it's the only way to solve a lot of the problems that carl mentioned and that is i think it's it's achilles heel in some ways because you know it's like finding the best way to cross the seven bridges of konigsberg you know it's always going to kind of be tied to those use cases because it's really special and it's really unique and because it's special and it's unique uh it it still unfortunately kind of stands apart from the rest of the community that's building let's say ai outcomes as the great great example here the graph databases and ai as carl mentioned are like chocolate and peanut butter but technologically they don't know how to talk to one another they're completely different um and you know it's you can't just stand up sql and query them you've got to to learn um yeah what is that carlos specter or uh special uh uh yeah thank you uh to actually get to the data in there and if you're gonna scale that data that graph database especially a property graph if you're gonna do something really complex like try to understand uh you know all of the metadata in your organization you might just end up with you know a graph database winter like we had the ai winter simply because you run out of performance to make the thing happen so i i think it's already disrupted but we we need to like treat it like a first-class citizen in in the data analytics and ai community we need to bring it into the fold we need to equip it with the tools it needs to do that the magic it does and to do it not just for specialized use cases but for everything because i i'm with carl i i think it's absolutely revolutionary so i had also identified the principal achilles heel of the technology which is scaling now when these when these things get large and complex enough that they spill over what a single server can handle you start to have difficulties because the relationships span things that have to be resolved over a network and then you get network latency and that slows the system down so that's still a 
problem to be solved sanjeev any quick thoughts on this i mean i think metadata on the on the on the word cloud is going to be the the largest font uh but what are your thoughts here i want to like step away so people don't you know associate me with only meta data so i want to talk about something a little bit slightly different uh dbengines.com has done an amazing job i think almost everyone knows that they chronicle all the major databases that are in use today in january of 2022 there are 381 databases on its list of ranked list of databases the largest category is rdbms the second largest category is actually divided into two property graphs and rdf graphs these two together make up the second largest number of data databases so talking about accolades here this is a problem the problem is that there's so many graph databases to choose from they come in different shapes and forms uh to bright's point there's so many query languages in rdbms is sql end of the story here we've got sci-fi we've got gremlin we've got gql and then your proprietary languages so i think there's a lot of disparity in this space but excellent all excellent points sanji i must say and that is a problem the languages need to be sorted and standardized and it needs people need to have a road map as to what they can do with it because as you say you can do so many things and so many of those things are unrelated that you sort of say well what do we use this for i'm reminded of the saying i learned a bunch of years ago when somebody said that the digital computer is the only tool man has ever devised that has no particular purpose all right guys we gotta we gotta move on to dave uh meninger uh we've heard about streaming uh your prediction is in that realm so please take it away sure so i like to say that historical databases are to become a thing of the past but i don't mean that they're going to go away that's not my point i mean we need historical databases but streaming data is going to become the default way in which we operate with data so in the next say three to five years i would expect the data platforms and and we're using the term data platforms to represent the evolution of databases and data lakes that the data platforms will incorporate these streaming capabilities we're going to process data as it streams into an organization and then it's going to roll off into historical databases so historical databases don't go away but they become a thing of the past they store the data that occurred previously and as data is occurring we're going to be processing it we're going to be analyzing we're going to be acting on it i mean we we only ever ended up with historical databases because we were limited by the technology that was available to us data doesn't occur in batches but we processed it in batches because that was the best we could do and it wasn't bad and we've continued to improve and we've improved and we've improved but streaming data today is still the exception it's not the rule right there's there are projects within organizations that deal with streaming data but it's not the default way in which we deal with data yet and so that that's my prediction is that this is going to change we're going to have um streaming data be the default way in which we deal with data and and how you label it what you call it you know maybe these databases and data platforms just evolve to be able to handle it but we're going to deal with data in a different way and our research shows that already about half of 
the participants in our analytics and data benchmark research are using streaming data you know another third are planning to use streaming technologies so that gets us to about eight out of ten organizations need to use this technology that doesn't mean they have to use it throughout the whole organization but but it's pretty widespread in its use today and has continued to grow if you think about the consumerization of i.t we've all been conditioned to expect immediate access to information immediate responsiveness you know we want to know if an uh item is on the shelf at our local retail store and we can go in and pick it up right now you know that's the world we live in and that's spilling over into the enterprise i.t world where we have to provide those same types of capabilities um so that's my prediction historical database has become a thing of the past streaming data becomes the default way in which we we operate with data all right thank you david well so what what say you uh carl a guy who's followed historical databases for a long time well one thing actually every database is historical because as soon as you put data in it it's now history it's no longer it no longer reflects the present state of things but even if that history is only a millisecond old it's still history but um i would say i mean i know you're trying to be a little bit provocative in saying this dave because you know as well as i do that people still need to do their taxes they still need to do accounting they still need to run general ledger programs and things like that that all involves historical data that's not going to go away unless you want to go to jail so you're going to have to deal with that but as far as the leading edge functionality i'm totally with you on that and i'm just you know i'm just kind of wondering um if this chain if this requires a change in the way that we perceive applications in order to truly be manifested and rethinking the way m applications work um saying that uh an application should respond instantly as soon as the state of things changes what do you say about that i i think that's true i think we do have to think about things differently that's you know it's not the way we design systems in the past uh we're seeing more and more systems designed that way but again it's not the default and and agree 100 with you that we do need historical databases you know that that's clear and even some of those historical databases will be used in conjunction with the streaming data right so absolutely i mean you know let's take the data warehouse example where you're using the data warehouse as context and the streaming data as the present you're saying here's a sequence of things that's happening right now have we seen that sequence before and where what what does that pattern look like in past situations and can we learn from that so tony bear i wonder if you could comment i mean if you when you think about you know real-time inferencing at the edge for instance which is something that a lot of people talk about um a lot of what we're discussing here in this segment looks like it's got great potential what are your thoughts yeah well i mean i think you nailed it right you know you hit it right on the head there which is that i think a key what i'm seeing is that essentially and basically i'm going to split this one down the middle is i don't see that basically streaming is the default what i see is streaming and basically and transaction databases um and analytics data you know data 
warehouses data lakes whatever are converging and what allows us technically to converge is cloud native architecture where you can basically distribute things so you could have you can have a note here that's doing the real-time processing that's also doing it and this is what your leads in we're maybe doing some of that real-time predictive analytics to take a look at well look we're looking at this customer journey what's happening with you know you know with with what the customer is doing right now and this is correlated with what other customers are doing so what i so the thing is that in the cloud you can basically partition this and because of basically you know the speed of the infrastructure um that you can basically bring these together and or and so and kind of orchestrate them sort of loosely coupled manner the other part is that the use cases are demanding and this is part that goes back to what dave is saying is that you know when you look at customer 360 when you look at let's say smart you know smart utility grids when you look at any type of operational problem it has a real-time component and it has a historical component and having predictives and so like you know you know my sense here is that there that technically we can bring this together through the cloud and i think the use case is that is that we we can apply some some real-time sort of you know predictive analytics on these streams and feed this into the transactions so that when we make a decision in terms of what to do as a result of a transaction we have this real time you know input sanjeev did you have a comment yeah i was just going to say that to this point you know we have to think of streaming very different because in the historical databases we used to bring the data and store the data and then we used to run rules on top uh aggregations and all but in case of streaming the mindset changes because the rules normally the inference all of that is fixed but the data is constantly changing so it's a completely reverse way of thinking of uh and building applications on top of that so dave menninger there seemed to be some disagreement about the default or now what kind of time frame are you are you thinking about is this end of decade it becomes the default what would you pin i i think around you know between between five to ten years i think this becomes the reality um i think you know it'll be more and more common between now and then but it becomes the default and i also want sanjeev at some point maybe in one of our subsequent conversations we need to talk about governing streaming data because that's a whole other set of challenges we've also talked about it rather in a two dimensions historical and streaming and there's lots of low latency micro batch sub second that's not quite streaming but in many cases it's fast enough and we're seeing a lot of adoption of near real time not quite real time as uh good enough for most for many applications because nobody's really taking the hardware dimension of this information like how do we that'll just happen carl so near real time maybe before you lose the customer however you define that right okay um let's move on to brad brad you want to talk about automation ai uh the the the pipeline people feel like hey we can just automate everything what's your prediction yeah uh i'm i'm an ai fiction auto so apologies in advance for that but uh you know um i i think that um we've been seeing automation at play within ai for some time now and it's helped us do do a 
lot of things for especially for practitioners that are building ai outcomes in the enterprise uh it's it's helped them to fill skills gaps it's helped them to speed development and it's helped them to to actually make ai better uh because it you know in some ways provides some swim lanes and and for example with technologies like ottawa milk and can auto document and create that sort of transparency that that we talked about a little bit earlier um but i i think it's there's an interesting kind of conversion happening with this idea of automation um and and that is that uh we've had the automation that started happening for practitioners it's it's trying to move outside of the traditional bounds of things like i'm just trying to get my features i'm just trying to pick the right algorithm i'm just trying to build the right model uh and it's expanding across that full life cycle of building an ai outcome to start at the very beginning of data and to then continue on to the end which is this continuous delivery and continuous uh automation of of that outcome to make sure it's right and it hasn't drifted and stuff like that and because of that because it's become kind of powerful we're starting to to actually see this weird thing happen where the practitioners are starting to converge with the users and that is to say that okay if i'm in tableau right now i can stand up salesforce einstein discovery and it will automatically create a nice predictive algorithm for me um given the data that i that i pull in um but what's starting to happen and we're seeing this from the the the companies that create business software so salesforce oracle sap and others is that they're starting to actually use these same ideals and a lot of deep learning to to basically stand up these out of the box flip a switch and you've got an ai outcome at the ready for business users and um i i'm very much you know i think that that's that's the way that it's going to go and what it means is that ai is is slowly disappearing uh and i don't think that's a bad thing i think if anything what we're going to see in 2022 and maybe into 2023 is this sort of rush to to put this idea of disappearing ai into practice and have as many of these solutions in the enterprise as possible you can see like for example sap is going to roll out this quarter this thing called adaptive recommendation services which which basically is a cold start ai outcome that can work across a whole bunch of different vertical markets and use cases it's just a recommendation engine for whatever you need it to do in the line of business so basically you're you're an sap user you look up to turn on your software one day and you're a sales professional let's say and suddenly you have a recommendation for customer churn it's going that's great well i i don't know i i think that's terrifying in some ways i think it is the future that ai is going to disappear like that but i am absolutely terrified of it because um i i think that what it what it really does is it calls attention to a lot of the issues that we already see around ai um specific to this idea of what what we like to call it omdia responsible ai which is you know how do you build an ai outcome that is free of bias that is inclusive that is fair that is safe that is secure that it's audible etc etc etc etc that takes some a lot of work to do and so if you imagine a customer that that's just a sales force customer let's say and they're turning on einstein discovery within their sales software you need 
some guidance to make sure that when you flip that switch that the outcome you're going to get is correct and that's that's going to take some work and so i think we're going to see this let's roll this out and suddenly there's going to be a lot of a lot of problems a lot of pushback uh that we're going to see and some of that's going to come from gdpr and others that sam jeeve was mentioning earlier a lot of it's going to come from internal csr requirements within companies that are saying hey hey whoa hold up we can't do this all at once let's take the slow route let's make ai automated in a smart way and that's going to take time yeah so a couple predictions there that i heard i mean ai essentially you disappear it becomes invisible maybe if i can restate that and then if if i understand it correctly brad you're saying there's a backlash in the near term people can say oh slow down let's automate what we can those attributes that you talked about are non trivial to achieve is that why you're a bit of a skeptic yeah i think that we don't have any sort of standards that companies can look to and understand and we certainly within these companies especially those that haven't already stood up in internal data science team they don't have the knowledge to understand what that when they flip that switch for an automated ai outcome that it's it's gonna do what they think it's gonna do and so we need some sort of standard standard methodology and practice best practices that every company that's going to consume this invisible ai can make use of and one of the things that you know is sort of started that google kicked off a few years back that's picking up some momentum and the companies i just mentioned are starting to use it is this idea of model cards where at least you have some transparency about what these things are doing you know so like for the sap example we know for example that it's convolutional neural network with a long short-term memory model that it's using we know that it only works on roman english uh and therefore me as a consumer can say oh well i know that i need to do this internationally so i should not just turn this on today great thank you carl can you add anything any context here yeah we've talked about some of the things brad mentioned here at idc in the our future of intelligence group regarding in particular the moral and legal implications of having a fully automated you know ai uh driven system uh because we already know and we've seen that ai systems are biased by the data that they get right so if if they get data that pushes them in a certain direction i think there was a story last week about an hr system that was uh that was recommending promotions for white people over black people because in the past um you know white people were promoted and and more productive than black people but not it had no context as to why which is you know because they were being historically discriminated black people being historically discriminated against but the system doesn't know that so you know you have to be aware of that and i think that at the very least there should be controls when a decision has either a moral or a legal implication when when you want when you really need a human judgment it could lay out the options for you but a person actually needs to authorize that that action and i also think that we always will have to be vigilant regarding the kind of data we use to train our systems to make sure that it doesn't introduce unintended biases and to some 
extent they always will so we'll always be chasing after them that's that's absolutely carl yeah i think that what you have to bear in mind as a as a consumer of ai is that it is a reflection of us and we are a very flawed species uh and so if you look at all the really fantastic magical looking supermodels we see like gpt three and four that's coming out z they're xenophobic and hateful uh because the people the data that's built upon them and the algorithms and the people that build them are us so ai is a reflection of us we need to keep that in mind yeah we're the ai's by us because humans are biased all right great okay let's move on doug henson you know a lot of people that said that data lake that term's not not going to not going to live on but it appears to be have some legs here uh you want to talk about lake house bring it on yes i do my prediction is that lake house and this idea of a combined data warehouse and data lake platform is going to emerge as the dominant data management offering i say offering that doesn't mean it's going to be the dominant thing that organizations have out there but it's going to be the predominant vendor offering in 2022. now heading into 2021 we already had cloudera data bricks microsoft snowflake as proponents in 2021 sap oracle and several of these fabric virtualization mesh vendors join the bandwagon the promise is that you have one platform that manages your structured unstructured and semi-structured information and it addresses both the beyond analytics needs and the data science needs the real promise there is simplicity and lower cost but i think end users have to answer a few questions the first is does your organization really have a center of data gravity or is it is the data highly distributed multiple data warehouses multiple data lakes on-premises cloud if it if it's very distributed and you you know you have difficulty consolidating and that's not really a goal for you then maybe that single platform is unrealistic and not likely to add value to you um you know also the fabric and virtualization vendors the the mesh idea that's where if you have this highly distributed situation that might be a better path forward the second question if you are looking at one of these lake house offerings you are looking at consolidating simplifying bringing together to a single platform you have to make sure that it meets both the warehouse need and the data lake need so you have vendors like data bricks microsoft with azure synapse new really to the data warehouse space and they're having to prove that these data warehouse capabilities on their platforms can meet the scaling requirements can meet the user and query concurrency requirements meet those tight slas and then on the other hand you have the or the oracle sap snowflake the data warehouse uh folks coming into the data science world and they have to prove that they can manage the unstructured information and meet the needs of the data scientists i'm seeing a lot of the lake house offerings from the warehouse crowd managing that unstructured information in columns and rows and some of these vendors snowflake in particular is really relying on partners for the data science needs so you really got to look at a lake house offering and make sure that it meets both the warehouse and the data lake requirement well thank you doug well tony if those two worlds are going to come together as doug was saying the analytics and the data science world does it need to be some kind of semantic layer in 
between i don't know weigh in on this topic if you would oh didn't we talk about data fabrics before common metadata layer um actually i'm almost tempted to say let's declare victory and go home in that this is actually been going on for a while i actually agree with uh you know much what doug is saying there which is that i mean we i remembered as far back as i think it was like 2014 i was doing a a study you know it was still at ovum predecessor omnia um looking at all these specialized databases that were coming up and seeing that you know there's overlap with the edges but yet there was still going to be a reason at the time that you would have let's say a document database for json you'd have a relational database for tran you know for transactions and for data warehouse and you had you know and you had basically something at that time that that resembles to do for what we're considering a day of life fast fo and the thing is what i was saying at the time is that you're seeing basically blur you know sort of blending at the edges that i was saying like about five or six years ago um that's all and the the lake house is essentially you know the amount of the the current manifestation of that idea there is a dichotomy in terms of you know it's the old argument do we centralize this all you know you know in in in in in a single place or do we or do we virtualize and i think it's always going to be a yin and yang there's never going to be a single single silver silver bullet i do see um that they're also going to be questions and these are things that points that doug raised they're you know what your what do you need of of of your of you know for your performance there or for your you know pre-performance characteristics do you need for instance hiking currency you need the ability to do some very sophisticated joins or is your requirement more to be able to distribute and you know distribute our processing is you know as far as possible to get you know to essentially do a kind of brute force approach all these approaches are valid based on you know based on the used case um i just see that essentially that the lake house is the culmination of it's nothing it's just it's a relatively new term introduced by databricks a couple years ago this is the culmination of basically what's been a long time trend and what we see in the cloud is that as we start seeing data warehouses as a checkbox item say hey we can basically source data in cloud and cloud storage and s3 azure blob store you know whatever um as long as it's in certain formats like you know like you know parquet or csv or something like that you know i see that as becoming kind of you know a check box item so to that extent i think that the lake house depending on how you define it is already reality um and in some in some cases maybe new terminology but not a whole heck of a lot new under the sun yeah and dave menger i mean a lot of this thank you tony but a lot of this is going to come down to you know vendor marketing right some people try to co-opt the term we talked about data mesh washing what are your thoughts on this yeah so um i used the term data platform earlier and and part of the reason i use that term is that it's more vendor neutral uh we've we've tried to uh sort of stay out of the the vendor uh terminology patenting world right whether whether the term lake house is what sticks or not the concept is certainly going to stick and we have some data to back it up about a quarter of organizations that are using data 
lakes today already incorporate data warehouse functionality into it so they consider their data lake house and data warehouse one in the same about a quarter of organizations a little less but about a quarter of organizations feed the data lake from the data warehouse and about a quarter of organizations feed the data warehouse from the data lake so it's pretty obvious that three quarters of organizations need to bring this stuff together right the need is there the need is apparent the technology is going to continue to verge converge i i like to talk about you know you've got data lakes over here at one end and i'm not going to talk about why people thought data lakes were a bad idea because they thought you just throw stuff in a in a server and you ignore it right that's not what a data lake is so you've got data lake people over here and you've got database people over here data warehouse people over here database vendors are adding data lake capabilities and data lake vendors are adding data warehouse capabilities so it's obvious that they're going to meet in the middle i mean i think it's like tony says i think we should there declare victory and go home and so so i it's just a follow-up on that so are you saying these the specialized lake and the specialized warehouse do they go away i mean johnny tony data mesh practitioners would say or or advocates would say well they could all live as just a node on the on the mesh but based on what dave just said are we going to see those all morph together well number one as i was saying before there's always going to be this sort of you know kind of you know centrifugal force or this tug of war between do we centralize the data do we do it virtualize and the fact is i don't think that work there's ever going to be any single answer i think in terms of data mesh data mesh has nothing to do with how you physically implement the data you could have a data mesh on a basically uh on a data warehouse it's just that you know the difference being is that if we use the same you know physical data store but everybody's logically manual basically governing it differently you know um a data mission is basically it's not a technology it's a process it's a governance process um so essentially um you know you know i basically see that you know as as i was saying before that this is basically the culmination of a long time trend we're essentially seeing a lot of blurring but there are going to be cases where for instance if i need let's say like observe i need like high concurrency or something like that there are certain things that i'm not going to be able to get efficiently get out of a data lake um and you know we're basically i'm doing a system where i'm just doing really brute forcing very fast file scanning and that type of thing so i think there always will be some delineations but i would agree with dave and with doug that we are seeing basically a a confluence of requirements that we need to essentially have basically the element you know the ability of a data lake and a data laid out their warehouse we these need to come together so i think what we're likely to see is organizations look for a converged platform that can handle both sides for their center of data gravity the mesh and the fabric vendors the the fabric virtualization vendors they're all on board with the idea of this converged platform and they're saying hey we'll handle all the edge cases of the stuff that isn't in that center of data gradient that is off distributed in a cloud or 
at a remote location so you can have that single platform for the center of of your your data and then bring in virtualization mesh what have you for reaching out to the distributed data bingo as they basically said people are happy when they virtualize data i i think yes at this point but to this uh dave meningas point you know they have convert they are converging snowflake has introduced support for unstructured data so now we are literally splitting here now what uh databricks is saying is that aha but it's easy to go from data lake to data warehouse than it is from data warehouse to data lake so i think we're getting into semantics but we've already seen these two converge so is that so it takes something like aws who's got what 15 data stores are they're going to have 15 converged data stores that's going to be interesting to watch all right guys i'm going to go down the list and do like a one i'm going to one word each and you guys each of the analysts if you wouldn't just add a very brief sort of course correction for me so sanjeev i mean governance is going to be the maybe it's the dog that wags the tail now i mean it's coming to the fore all this ransomware stuff which really didn't talk much about security but but but what's the one word in your prediction that you would leave us with on governance it's uh it's going to be mainstream mainstream okay tony bear mesh washing is what i wrote down that's that's what we're going to see in uh in in 2022 a little reality check you you want to add to that reality check is i hope that no vendor you know jumps the shark and calls their offering a data mesh project yeah yeah let's hope that doesn't happen if they do we're going to call them out uh carl i mean graph databases thank you for sharing some some you know high growth metrics i know it's early days but magic is what i took away from that it's the magic database yeah i would actually i've said this to people too i i kind of look at it as a swiss army knife of data because you can pretty much do anything you want with it it doesn't mean you should i mean that's definitely the case that if you're you know managing things that are in a fixed schematic relationship probably a relational database is a better choice there are you know times when the document database is a better choice it can handle those things but maybe not it may not be the best choice for that use case but for a great many especially the new emerging use cases i listed it's the best choice thank you and dave meninger thank you by the way for bringing the data in i like how you supported all your comments with with some some data points but streaming data becomes the sort of default uh paradigm if you will what would you add yeah um i would say think fast right that's the world we live in you got to think fast fast love it uh and brad shimon uh i love it i mean on the one hand i was saying okay great i'm afraid i might get disrupted by one of these internet giants who are ai experts so i'm gonna be able to buy instead of build ai but then again you know i've got some real issues there's a potential backlash there so give us the there's your bumper sticker yeah i i would say um going with dave think fast and also think slow uh to to talk about the book that everyone talks about i would say really that this is all about trust trust in the idea of automation and of a transparent invisible ai across the enterprise but verify verify before you do anything and then doug henson i mean i i look i think the the trend is your 
friend here on this prediction with lake house is uh really becoming dominant i liked the way you set up that notion of you know the the the data warehouse folks coming at it from the analytics perspective but then you got the data science worlds coming together i still feel as though there's this piece in the middle that we're missing but your your final thoughts we'll give you the last well i think the idea of consolidation and simplification uh always prevails that's why the appeal of a single platform is going to be there um we've already seen that with uh you know hadoop platforms moving toward cloud moving toward object storage and object storage becoming really the common storage point for whether it's a lake or a warehouse uh and that second point uh i think esg mandates are uh are gonna come in alongside uh gdpr and things like that to uh up the ante for uh good governance yeah thank you for calling that out okay folks hey that's all the time that that we have here your your experience and depth of understanding on these key issues and in data and data management really on point and they were on display today i want to thank you for your your contributions really appreciate your time enjoyed it thank you now in addition to this video we're going to be making available transcripts of the discussion we're going to do clips of this as well we're going to put them out on social media i'll write this up and publish the discussion on wikibon.com and siliconangle.com no doubt several of the analysts on the panel will take the opportunity to publish written content social commentary or both i want to thank the power panelist and thanks for watching this special cube presentation this is dave vellante be well and we'll see you next time [Music] you
Predictions 2022: Top Analysts See the Future of Data
(bright music) >> In the 2010s, organizations became keenly aware that data would become the key ingredient to driving competitive advantage, differentiation, and growth. But to this day, putting data to work remains a difficult challenge for many, if not most organizations. Now, as the cloud matures, it has become a game changer for data practitioners by making cheap storage and massive processing power readily accessible. We've also seen better tooling in the form of data workflows, streaming, machine intelligence, AI, developer tools, security, observability, automation, new databases and the like. These innovations accelerate data proficiency, but at the same time, they add complexity for practitioners. Data lakes, data hubs, data warehouses, data marts, data fabrics, data meshes, data catalogs, data oceans are forming, they're evolving and exploding onto the scene. So in an effort to bring perspective to the sea of optionality, we've brought together the brightest minds in the data analyst community to discuss how data management is morphing and what practitioners should expect in 2022 and beyond. Hello everyone, my name is Dave Vellante with theCUBE, and I'd like to welcome you to a special CUBE presentation, Analyst Predictions 2022: The Future of Data Management. We've gathered six of the best analysts in data and data management who are going to present and discuss their top predictions and trends for 2022 and the first half of this decade. Let me introduce our six power panelists. Sanjeev Mohan is a former Gartner analyst and principal at SanjMo. Tony Baer is principal at dbInsight. Carl Olofson is a well-known research vice president with IDC. Dave Menninger is senior vice president and research director at Ventana Research. Brad Shimmin is chief analyst, AI platforms, analytics and data management at Omdia. And Doug Henschen is vice president and principal analyst at Constellation Research. Gentlemen, welcome to the program and thanks for coming on theCUBE today. >> Great to be here. >> Thank you. >> All right, here's the format we're going to use. I, as moderator, am going to call on each analyst separately, who then will deliver their prediction or megatrend, and then in the interest of time management and pace, two analysts will have the opportunity to comment. If we have more time, we'll elongate it, but let's get started right away. Sanjeev Mohan, please kick it off. You want to talk about governance, go ahead sir. >> Thank you Dave. I believe that data governance, which we've been talking about for many years, is now not only going to be mainstream, it's going to be table stakes. And all the things that you mentioned, you know, the data oceans, data lakes, lake houses, data fabrics, meshes; the common glue is metadata. If we don't understand what data we have and govern it, there is no way we can manage it. So we saw Informatica go public last year after a hiatus of six years. I'm predicting that this year we see some more companies go public. My bet is on Collibra, most likely, and maybe Alation we'll see go public this year. I'm also predicting that the scope of data governance is going to expand beyond just data. It's not just data and reports. We are going to see more transformations, like Spark jobs, Python, even Airflow. We're going to see more streaming data, so Kafka Schema Registry, for example. We will see AI models become part of this whole governance suite.
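To make Sanjeev's point concrete, that metadata is the common glue across tables, pipelines, streaming schemas, and AI models, here is a minimal, purely illustrative sketch of what a governance catalog entry with lineage might look like. The asset names, owners, and fields are hypothetical and far simpler than what any real catalog product tracks; the point is only that one registry can describe heterogeneous assets and answer lineage questions across them.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CatalogEntry:
    """One governed asset: a table, a pipeline, a streaming schema, or a model."""
    name: str
    asset_type: str                      # e.g. "table" | "pipeline" | "ml_model"
    owner: str
    tags: List[str] = field(default_factory=list)
    upstream: List[str] = field(default_factory=list)   # lineage: assets this one reads from

class Catalog:
    def __init__(self):
        self._entries: Dict[str, CatalogEntry] = {}

    def register(self, entry: CatalogEntry) -> None:
        self._entries[entry.name] = entry

    def lineage(self, name: str) -> List[str]:
        """Walk upstream edges to answer 'where did this come from?' for impact analysis."""
        seen, stack = [], list(self._entries[name].upstream)
        while stack:
            parent = stack.pop()
            if parent not in seen:
                seen.append(parent)
                stack.extend(self._entries.get(parent, CatalogEntry(parent, "unknown", "?")).upstream)
        return seen

catalog = Catalog()
catalog.register(CatalogEntry("orders_raw", "table", owner="sales-domain"))
catalog.register(CatalogEntry("orders_cleaned", "pipeline", owner="sales-domain", upstream=["orders_raw"]))
catalog.register(CatalogEntry("churn_model_v3", "ml_model", owner="data-science", upstream=["orders_cleaned"]))

print(catalog.lineage("churn_model_v3"))   # ['orders_cleaned', 'orders_raw']
```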
So the governance suite is going to be very comprehensive, very detailed lineage, impact analysis, and then even expand into data quality. We've already seen that happen with some of the tools, where they are buying these smaller companies and bringing in data quality monitoring and integrating it with metadata management, data catalogs, also data access governance. So what we are going to see is that once the data governance platforms become the key entry point into these modern architectures, I'm predicting that the usage, the number of users of a data catalog is going to exceed that of a BI tool. That will take time, and we've already seen that trajectory. Right now if you look at BI tools, I would say there are a hundred users of a BI tool for every one user of a data catalog. And I see that evening out over a period of time, and at some point data catalogs will really become the main way for us to access data. The data catalog will help us visualize data, but if we want to do more in-depth analysis, it'll be the jumping-off point into the BI tool, the data science tool, and that is the journey I see for the data governance products. >> Excellent, thank you. Some comments. Maybe Doug, a lot of things to weigh in on there, maybe you can comment. >> Yeah, Sanjeev, I think you're spot on, a lot of the trends. The one disagreement: I think it's really still far from mainstream. As you say, we've been talking about this for years, it's like God, motherhood, apple pie, everyone agrees it's important, but too few organizations are really practicing good governance because it's hard and because the incentives have been lacking. I think one thing that deserves mention in this context is ESG mandates and guidelines, these are environmental, social and governance regs and guidelines. We've seen the environmental regs and guidelines imposed in industries, particularly the carbon-intensive industries. We've seen the social mandates, particularly diversity, imposed on suppliers by companies that are leading on this topic. We've seen governance guidelines now being imposed by banks on investors. So these ESGs are presenting new carrots and sticks, and it's going to demand more solid data. It's going to demand more detailed reporting and solid reporting, tighter governance. But we're still far from mainstream adoption. We have a lot of, you know, best-of-breed niche players in the space. I think the signs that it's going to be more mainstream are starting with things like Azure Purview, Google Dataplex; the big cloud platform players seem to be upping the ante and starting to address governance. >> Excellent, thank you Doug. Brad, I wonder if you could chime in as well. >> Yeah, I would love to be a believer in data catalogs. But to Doug's point, I think that it's going to take some more pressure for that to happen. I recall metadata being something every enterprise thought they were going to get under control when we were working on service-oriented architecture back in the nineties, and that didn't happen quite the way we anticipated. And so to Sanjeev's point, it's because it is really complex and really difficult to do. My hope is that, you know, we won't sort of, how do I put this? Fade out into this nebula of domain catalogs that are specific to individual use cases, like Purview for getting data quality right, or like data governance and cybersecurity. And instead we have some tooling that can actually be adaptive to gather metadata to create something. And I know it's important to you, Sanjeev, and that is this idea of observability.
If you can get enough metadata without moving your data around, but understanding the entirety of a system that's running on this data, you can do a lot. So to help with the governance that Doug is talking about. >> So I just want to add that data governance, like many other initiatives, did not succeed; even AI went into an AI winter, but that's a different topic. But a lot of these things did not succeed because, to your point, the incentives were not there. I remember when Sarbanes-Oxley had come onto the scene, if a bank did not do Sarbanes-Oxley, they were very happy to pay a million-dollar fine. That was, you know, pocket change for them instead of doing the right thing. But I think the stakes are much higher now. With GDPR, the floodgates opened. Now, you know, California has CCPA, but even CCPA is being outdated with CPRA, which is much more GDPR-like. So we are very rapidly entering a space where pretty much every major country in the world is coming up with its own compliance regulatory requirements; data residency is becoming really important. And I think we are going to reach a stage where it won't be optional anymore, so whether we like it or not. And I think the reason data catalogs were not successful in the past is because we did not have the right focus on adoption. We were focused on features, and these features were disconnected, very hard for business to adopt. These were built by IT people for IT departments to take a look at technical metadata, not business metadata. Today the tables have turned. CDOs are driving this initiative, regulatory compliances are beating down hard, so I think the time might be right. >> Yeah so guys, we have to move on here. But there's some real meat on the bone here, Sanjeev. I like the fact that you called out Collibra and Alation, so we can look back a year from now and say, okay, he made the call, he stuck with it. And then the ratio of BI tools to data catalogs, that's another sort of measurement that we can take, even though with some skepticism there, that's something that we can watch. And I wonder if someday we'll have more metadata than data. But I want to move to Tony Baer. You want to talk about data mesh, and speaking, you know, coming off of governance, I mean, wow, the whole concept of data mesh is decentralized data, and then governance becomes, you know, a nightmare there, but take it away, Tony. >> Well, I'll put it this way. Data mesh, you know, the idea, at least as proposed by ThoughtWorks, basically came out at least a couple of years ago, and the press has been almost uniformly uncritical. A good reason for that is for all the problems that Sanjeev and Doug and Brad were just speaking about, which is that we have all this data out there and we don't know what to do about it. Now, that's not a new problem. That was a problem we had with enterprise data warehouses, it was a problem when we had Hadoop data clusters, and it's even more of a problem now that data is out in the cloud, where the data is not only in your data lake, it's not only in S3, it's all over the place. And it also includes streaming, which I know we'll be talking about later. So the data mesh was a response to that, the idea being, you know, who are the folks that really know best about governance? It's the domain experts. So data mesh was basically an architectural pattern and a process. My prediction for this year is that data mesh is going to hit cold, hard reality.
Because if you do a Google search, basically the published work, the articles on data mesh have been largely, you know, pretty uncritical so far, basically lauding it as being a very revolutionary new idea. I don't think it's that revolutionary, because we've talked about ideas like this. Brad, you and I met years ago when we were talking about SOA, and decentralizing all of this was at the application level. Now we're talking about it at the data level. And now we have microservices. So there's this thought of, if we're managing apps in cloud native through microservices, why don't we think of data in the same way? My sense this year, and this has been a very active search term if you look at Google search trends, is that now enterprises are going to look at this seriously. And as they look at it seriously, it's going to attract its first real hard scrutiny, it's going to attract its first backlash. That's not necessarily a bad thing. It means that it's being taken seriously. The reason why I think that you'll start to see basically the cold, hard light of day shine on data mesh is that it's still a work in progress. You know, this idea is basically a couple of years old, and there are still some pretty major gaps. The biggest gap is in the area of federated governance. Now, federated governance itself is not a new issue. With federated governance, we're trying to figure out how we can strike the balance between, let's say, consistent enterprise policy and consistent enterprise governance, but yet let the groups that understand the data govern it; how do we balance the two? There's a huge gap there in practice and knowledge. Also, to a lesser extent, there's a technology gap, which is basically in the self-service technologies that will help teams essentially govern data through the full life cycle: from selecting the data, to building the pipelines, to determining your access control, to looking at quality, looking at whether the data is fresh or whether it's trending off course. So my prediction is that it will receive its first harsh scrutiny this year. You are going to see some organizations and enterprises declare premature victory when they build some federated query implementations. You're going to see vendors start to data-mesh-wash their products. Anybody in the data management space, whether it's basically a pipelining tool, whether it's ELT, whether it's a catalog or a federated query tool, they're all going to be promoting the fact of how they support this. Hopefully nobody's going to call themselves a data mesh tool, because data mesh is not a technology. We're going to see one other thing come out of this, and this harks back to the metadata that Sanjeev was talking about and the catalogs that he was talking about, which is that there's going to be a new focus, a renewed focus, on metadata. And I think that's going to spur interest in data fabrics. Now, data fabrics are pretty vaguely defined, but if we just take the most elemental definition, which is a common metadata back plane, I think that if anybody is going to get serious about data mesh, they need to look at a data fabric, because we all, at the end of the day, need to read from the same sheet of music.
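Tony's federated governance gap, balancing consistent enterprise policy against domain-level autonomy, can be sketched in a few lines. The rules and asset fields below are invented for illustration and are not how any particular data mesh or data fabric product models policy; the sketch only shows central and domain rules being evaluated together against the same metadata.

```python
# Toy illustration of federated governance: enterprise-wide policies apply everywhere,
# while each domain layers on its own rules. Rule names and fields are hypothetical.

ENTERPRISE_POLICIES = [
    lambda asset: ("owner" in asset) or "every asset needs an accountable owner",
    lambda asset: (asset.get("pii") is False or asset.get("masked") is True)
        or "PII must be masked before it is published",
]

DOMAIN_POLICIES = {
    "sales": [
        lambda asset: (asset.get("freshness_hours", 999) <= 24)
            or "sales data products must be refreshed daily",
    ],
    "finance": [
        lambda asset: (asset.get("retention_years", 0) >= 7)
            or "finance data must be retained for 7 years",
    ],
}

def evaluate(asset: dict, domain: str) -> list:
    """Return the list of policy violations for one data product candidate."""
    violations = []
    for rule in ENTERPRISE_POLICIES + DOMAIN_POLICIES.get(domain, []):
        result = rule(asset)            # each rule returns True or a violation message
        if result is not True:
            violations.append(result)
    return violations

candidate = {"name": "orders_daily", "owner": "sales-domain", "pii": False, "freshness_hours": 6}
print(evaluate(candidate, "sales"))     # [] means the domain may publish it
```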
>> So thank you Tony. Dave Menninger, I mean, one of the things that people like about data mesh is it pretty crisply articulates some of the flaws in today's organizational approaches to data. What are your thoughts on this? >> Well, I think we have to start by defining data mesh, right? The term is already getting corrupted, right? Tony said it's going to see the cold hard light of day. And there's a problem right now that there are a number of overlapping terms that are similar but not identical. So we've got data virtualization, data fabric, excuse me for a second. (clears throat) Sorry about that. Data virtualization, data fabric, data federation, right? So I think that it's not really clear what each vendor means by these terms. I see data mesh and data fabric becoming quite popular. I've interpreted data mesh as referring primarily to the governance aspects, as originally intended and specified. But that's not the way I see vendors using it. I see vendors using it much more to mean data fabric and data virtualization. So I'm going to comment on the group of those things. I think the group of those things is going to happen. They're going to happen, they're going to become more robust. Our research suggests that a quarter of organizations are already using virtualized access to their data lakes, and another half, so a total of three quarters, will eventually be accessing their data lakes using some sort of virtualized access. Again, whether you define it as mesh or fabric or virtualization isn't really the point here. But this notion that there are different elements of data, metadata and governance within an organization that all need to be managed collectively. The interesting thing is when you look at the satisfaction rates of those organizations using virtualization versus those that are not, it's almost double. 68% of organizations, I'm sorry, 79% of organizations that were using virtualized access expressed satisfaction with their access to the data lake. Only 39% expressed satisfaction if they weren't using virtualized access. >> Oh, thank you, Dave. Sanjeev, we just got about a couple of minutes on this topic, but I know you're speaking, or maybe you've already spoken, on a panel with (indistinct) who sort of invented the concept. Governance obviously is a big sticking point, but what are your thoughts on this? You're on mute. (panelist chuckling) >> So my message to (indistinct) and to the community is, as opposed to what they said, let's not define it. We spent a whole year defining it, there are four principles: domain, product, data infrastructure, and governance. Let's take it to the next level. I get a lot of questions on what is the difference between data fabric and data mesh. And I'm like, I can't compare the two, because data mesh is a business concept, data fabric is a data integration pattern. How do you compare the two? You have to bring data mesh a level down. So to Tony's point, I'm on a warpath in 2022 to take it down to what does a data product look like, how do we handle shared data across domains, and governance. And I think we are going to see more of that in 2022, this "operationalization" of data mesh. >> I think we could have a whole hour on this topic, couldn't we? Maybe we should do that. But let's move on. Let's go to Carl. So Carl, you're a database guy, you've been around that block for a while now, you want to talk about graph databases, bring it on. >> Oh yeah. Okay, thanks.
So I regard graph databases as basically the next truly revolutionary database management technology. I'm looking at a forecast for the graph database market, which of course we haven't defined yet. So obviously I have a little wiggle room in what I'm about to say. But this market will grow by about 600% over the next 10 years. Now, 10 years is a long time. But over the next five years, we expect to see gradual growth as people start to learn how to use it. The problem is not that it's not useful, it's that people don't know how to use it. So let me explain before I go any further what a graph database is, because some of the folks on the call may not know what it is. A graph database organizes data according to a mathematical structure called a graph. The graph has elements called nodes and edges. So a data element drops into a node, the nodes are connected by edges, the edges connect one node to another node. Combinations of edges create structures that you can analyze to determine how things are related. In some cases, the nodes and edges can have properties attached to them, which add additional informative material that makes it richer; that's called a property graph. There are two principal use cases for graph databases. There are semantic graphs, which are used to break down human language text into semantic structures. Then you can search it, organize it and answer complicated questions. A lot of AI is aimed at semantic graphs. Another kind is the property graph that I just mentioned, which has a dazzling number of use cases. I want to just point out, as I talk about this, people are probably wondering, well, we have relational databases, isn't that good enough? So a relational database defines... It supports what I call definitional relationships. That means you define the relationships in a fixed structure. The data drops into that structure, there's a value, a foreign key value, that relates one table to another, and that value is fixed. You don't change it. If you change it, the database becomes unstable, it's not clear what you're looking at. In a graph database, the system is designed to handle change so that it can reflect the true state of the things that it's being used to track. So let me just give you some examples of use cases for this. They include entity resolution, data lineage, social media analysis, Customer 360, fraud prevention. There's cybersecurity, there's supply chain, that's a big one actually. There is explainable AI, and this is going to become important too, because a lot of people are adopting AI. But they want a system after the fact to say, how did the AI system come to that conclusion? How did it make that recommendation? Right now we don't have really good ways of tracking that. Machine learning in general, social networks, I already mentioned that. And then we've got, oh gosh, we've got data governance, data compliance, risk management. We've got recommendations, we've got personalization, anti-money laundering, that's another big one, identity and access management, network and IT operations is already becoming a key one where you actually have mapped out your operation, you know, whatever it is, your data center, and you can track what's going on as things happen there, root cause analysis, fraud detection is a huge one.
A number of major credit card companies use graph databases for fraud detection, risk analysis, tracking and tracing, churn analysis, next best action, what-if analysis, impact analysis, entity resolution, and I would add one other thing, or just a few other things, to this list: metadata management. So Sanjeev, here you go, this is your engine. Because I was in metadata management for quite a while in my past life. And one of the things I found was that none of the data management technologies that were available to us could efficiently handle metadata, because of the kinds of structures that result from it, but graphs can, okay? Graphs can do things like say, this term in this context means this, but in that context, it means that, okay? Things like that. And in fact, logistics management, supply chain. And also because it handles recursive relationships, and by recursive relationships I mean objects that own other objects that are of the same type. You can do things like bill of materials, you know, parts explosion. Or you can do an HR analysis, who reports to whom, how many levels up the chain and that kind of thing. You can do that with relational databases, but it takes a lot of programming. In fact, you can do almost any of these things with relational databases, but the problem is, you have to program it. It's not supported in the database. And whenever you have to program something, that means you can't trace it, you can't define it. You can't publish it in terms of its functionality, and it's really, really hard to maintain over time. >> Carl, thank you. I wonder if we could bring Brad in, I mean. Brad, I'm sitting here wondering, okay, is this incremental to the market? Is it disruptive and a replacement? What are your thoughts on this space? >> It's already disrupted the market. I mean, like Carl said, go to any bank and ask them, are you using graph databases to get fraud detection under control? And they'll say, absolutely, that's the only way to solve this problem. And it is, frankly. And it's the only way to solve a lot of the problems that Carl mentioned. And that is, I think, its Achilles' heel in some ways. Because, you know, it's like finding the best way to cross the seven bridges of Koenigsberg. You know, it's always going to kind of be tied to those use cases, because it's really special and it's really unique, and because it's special and it's unique, it still unfortunately kind of stands apart from the rest of the community that's building, let's say, AI outcomes, as a great example here. Graph databases and AI, as Carl mentioned, are like chocolate and peanut butter. But technologically, they don't know how to talk to one another, they're completely different. And you know, you can't just stand up SQL and query them. You've got to learn, what is it, Carl, SPARQL? Yeah, thank you, to actually get to the data in there. And if you're going to scale that data, that graph database, especially a property graph, if you're going to do something really complex, like try to understand, you know, all of the metadata in your organization, you might just end up with, you know, a graph database winter, like we had the AI winter, simply because you run out of performance to make the thing happen. So, I think it's already disrupted, but we need to treat it like a first-class citizen in the data analytics and AI community. We need to bring it into the fold.
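To make Carl's nodes-and-edges description concrete, here is a minimal, self-contained Python sketch of a toy property graph and the kind of recursive "who reports to whom" traversal he mentions. It is purely illustrative: a real graph database such as Neo4j would express this traversal as a short query in a language like Cypher rather than a hand-written loop, and the names and properties below are hypothetical.

```python
# A toy property graph: nodes and edges both carry properties, and the
# structure itself (not a fixed foreign-key schema) expresses relationships.
nodes = {
    "alice": {"label": "Employee", "title": "CEO"},
    "bob":   {"label": "Employee", "title": "VP Engineering"},
    "carol": {"label": "Employee", "title": "Engineer"},
}
# Each edge: (source, relationship type, target, properties)
edges = [
    ("bob",   "REPORTS_TO", "alice", {"since": 2019}),
    ("carol", "REPORTS_TO", "bob",   {"since": 2021}),
]


def management_chain(person: str):
    """Walk REPORTS_TO edges recursively: the 'who reports to whom,
    how many levels up the chain' question Carl mentions."""
    chain = []
    current = person
    while True:
        nxt = next((dst for src, rel, dst, _ in edges
                    if src == current and rel == "REPORTS_TO"), None)
        if nxt is None:
            return chain
        chain.append(nxt)
        current = nxt


print(management_chain("carol"))  # ['bob', 'alice']
```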
We need to equip it with the tools it needs to do the magic it does, and to do it not just for specialized use cases, but for everything. 'Cause I'm with Carl. I think it's absolutely revolutionary. >> Brad identified the principal Achilles' heel of the technology, which is scaling. When these things get large and complex enough that they spill over what a single server can handle, you start to have difficulties, because the relationships span things that have to be resolved over a network, and then you get network latency and that slows the system down. So that's still a problem to be solved. >> Sanjeev, any quick thoughts on this? I mean, I think metadata on the word cloud is going to be the largest font, but what are your thoughts here? >> I want to (indistinct) So people don't associate me with only metadata, so I want to talk about something slightly different. dbengines.com has done an amazing job. I think almost everyone knows that they chronicle all the major databases that are in use today. In January of 2022, there were 381 databases on their ranked list of databases. The largest category is RDBMS. The second largest category is actually divided into two: property graphs and RDF graphs. These two together make up the second largest number of databases. So talking about an Achilles heel, this is the problem. The problem is that there are so many graph databases to choose from. They come in different shapes and forms. To Brad's point, there are so many query languages. In RDBMS, it's SQL, we know the story, but here we've got Cypher, we've got Gremlin, we've got GQL, and then there are proprietary languages. So I think there's a lot of disparity in this space. >> Well, excellent. All excellent points, Sanjeev, if I must say. And that is a problem, that the languages need to be sorted and standardized. People need to have a roadmap as to what they can do with it. Because as you say, you can do so many things, and so many of those things are unrelated, that you sort of say, well, what do we use this for? And I'm reminded of a saying I learned a bunch of years ago. Somebody said that the digital computer is the only tool man has ever devised that has no particular purpose. (panelists chuckle) >> All right guys, we got to move on to Dave Menninger. We've heard about streaming. Your prediction is in that realm, so please take it away. >> Sure. So I like to say that historical databases are going to become a thing of the past. By that I don't mean that they're going to go away, that's not my point. I mean, we need historical databases, but streaming data is going to become the default way in which we operate with data. So in the next, say, three to five years, I would expect that data platforms, and we're using the term data platforms to represent the evolution of databases and data lakes, that the data platforms will incorporate these streaming capabilities. We're going to process data as it streams into an organization, and then it's going to roll off into a historical database. So historical databases don't go away, but they become a thing of the past. They store the data that occurred previously. And as data is occurring, we're going to be processing it, we're going to be analyzing it, we're going to be acting on it. I mean, we only ever ended up with historical databases because we were limited by the technology that was available to us. Data doesn't occur in batches. But we processed it in batches because that was the best we could do.
And it wasn't bad, and we've continued to improve and we've improved and we've improved. But streaming data today is still the exception. It's not the rule, right? There are projects within organizations that deal with streaming data. But it's not the default way in which we deal with data yet. And so that's my prediction, that this is going to change: we're going to have streaming data be the default way in which we deal with data, however you label it and whatever you call it. You know, maybe these databases and data platforms just evolve to be able to handle it. But we're going to deal with data in a different way. And our research shows that already, about half of the participants in our analytics and data benchmark research are using streaming data. You know, another third are planning to use streaming technologies. So that gets us to about eight out of 10 organizations that need to use this technology. And that doesn't mean they have to use it throughout the whole organization, but it's pretty widespread in its use today and it's continued to grow. If you think about the consumerization of IT, we've all been conditioned to expect immediate access to information, immediate responsiveness. You know, we want to know if an item is on the shelf at our local retail store and we can go in and pick it up right now. You know, that's the world we live in, and that's spilling over into the enterprise IT world. We have to provide those same types of capabilities. So that's my prediction: historical databases become a thing of the past, streaming data becomes the default way in which we operate with data. >> All right, thank you, David. Well, so what say you, Carl, the guy who has followed historical databases for a long time? >> Well, one thing actually, every database is historical, because as soon as you put data in it, it's now history. It no longer reflects the present state of things. But even if that history is only a millisecond old, it's still history. But I would say, I mean, I know you're trying to be a little bit provocative in saying this, Dave, 'cause you know as well as I do that people still need to do their taxes, they still need to do accounting, they still need to run general ledger programs and things like that. That all involves historical data. That's not going to go away unless you want to go to jail. So you're going to have to deal with that. But as far as the leading edge functionality, I'm totally with you on that. And I'm just, you know, I'm just kind of wondering if this requires a change in the way that we perceive applications in order to truly be manifested, rethinking the way applications work, saying that an application should respond instantly, as soon as the state of things changes. What do you say about that? >> I think that's true. I think we do have to think about things differently. It's not the way we designed systems in the past. We're seeing more and more systems designed that way. But again, it's not the default. And I agree 100% with you that we do need historical databases, you know, that's clear. And even some of those historical databases will be used in conjunction with the streaming data, right? >> Absolutely. I mean, you know, let's take the data warehouse example, where you're using the data warehouse as the context and the streaming data as the present, and you're saying, here's the sequence of things that's happening right now. Have we seen that sequence before? And where? What does that pattern look like in past situations? And can we learn from that?
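Carl's point, checking the sequence arriving right now against patterns the warehouse has already seen, can be sketched in a few lines. The example below is a self-contained toy in plain Python, not a real streaming engine: in practice the events would come from something like a Kafka topic or a change feed and the historical patterns from a warehouse query, and the event names and sequences here are hypothetical.

```python
from collections import deque

# Historical context (what a warehouse would hold): short sequences of
# actions that previously preceded a bad outcome.
historical_bad_sequences = {
    ("add_to_cart", "apply_coupon", "remove_item"),
    ("login_fail", "login_fail", "login_fail"),
}


def event_stream():
    # Stand-in for a real stream (Kafka topic, change feed, etc.).
    yield from ["login_fail", "login_fail", "login_fail", "checkout"]


window = deque(maxlen=3)  # the "present": a short sliding window of events
for event in event_stream():
    window.append(event)
    # As each event arrives, ask: have we seen this sequence before?
    if tuple(window) in historical_bad_sequences:
        print(f"alert: {tuple(window)} matches a known historical pattern")
```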
>> So Tony Baer, I wonder if you could comment? I mean, when you think about, you know, real-time inferencing at the edge, for instance, which is something that a lot of people talk about, a lot of what we're discussing here in this segment, it looks like it's got great potential. What are your thoughts? >> Yeah, I mean, I think you nailed it right. You know, you hit it right on the head there. Which is that, what I'm seeing is that, essentially, and I'm going to split this one down the middle, I don't see that basically streaming becomes the default. What I see is streaming and basically transaction databases and analytic data, you know, data warehouses, data lakes, whatever, converging. And what allows us technically to converge is cloud native architecture, where you can basically distribute things. So you can have a node here that's doing the real-time processing, that's also doing, and this is where it leads in, maybe doing some of that real-time predictive analytics to take a look at, well look, we're looking at this customer journey, what's happening with what the customer is doing right now, and this is correlated with what other customers are doing. So the thing is that in the cloud, you can basically partition this, and because of basically the speed of the infrastructure, then you can basically bring these together and kind of orchestrate them in sort of a loosely coupled manner. The other part is that the use cases are demanding it, and this part of it goes back to what Dave is saying. Is that, you know, when you look at Customer 360, when you look at, let's say, smart utility products, when you look at any type of operational problem, it has a real-time component and it has a historical component, and it has a predictive component. So, you know, my sense here is that technically we can bring this together through the cloud. And I think the use case is that we can apply some real-time sort of predictive analytics on these streams and feed this into the transactions, so that when we make a decision in terms of what to do as a result of a transaction, we have this real-time input. >> Sanjeev, did you have a comment? >> Yeah, I was just going to say that, to Dave's point, you know, we have to think of streaming very differently, because in the historical databases, we used to bring the data and store the data and then we used to run rules on top, aggregations and all. But in the case of streaming, the mindset changes, because the rules, the inference, all of that is fixed, but the data is constantly changing. So it's a completely reversed way of thinking and building applications on top of that. >> So Dave Menninger, there seems to be some disagreement about the default. What kind of timeframe are you thinking about? Is it end of decade when it becomes the default? Where would you pin it? >> I think around, you know, between five to 10 years, I think this becomes the reality. >> I think it's... >> It'll be more and more common between now and then, but it becomes the default. And I also want, Sanjeev, at some point, maybe in one of our subsequent conversations, we need to talk about governing streaming data. 'Cause that's a whole other set of challenges. >> We've also talked about it in rather two dimensions, historical and streaming, and there's lots of low latency, micro-batch, sub-second, that's not quite streaming, but in many cases it's fast enough, and we're seeing a lot of adoption of near real-time, not quite real-time, as good enough for many applications.
(indistinct cross talk from panelists) >> Because nobody's really taking the hardware dimension (mumbles). >> That'll just happen, Carl. (panelists laughing) >> So near real-time. But maybe before you lose the customer, however we define that, right? Okay, let's move on to Brad. Brad, you want to talk about automation, AI, the pipeline; people feel like, hey, we can just automate everything. What's your prediction? >> Yeah, I'm an AI aficionado, so apologies in advance for that. But, you know, I think that we've been seeing automation play within AI for some time now. And it's helped us do a lot of things, especially for practitioners that are building AI outcomes in the enterprise. It's helped them to fill skills gaps, it's helped them to speed development, and it's helped them to actually make AI better. 'Cause it, you know, in some ways provides some swim lanes, and for example, with technologies like AutoML, can auto-document and create that sort of transparency that we talked about a little bit earlier. But I think there's an interesting kind of convergence happening with this idea of automation. And that is that we've had the automation that started happening for practitioners, and it's trying to move outside of the traditional bounds of things like, I'm just trying to get my features, I'm just trying to pick the right algorithm, I'm just trying to build the right model, and it's expanding across that full life cycle of building an AI outcome, to start at the very beginning with data and to then continue on to the end, which is this continuous delivery and continuous automation of that outcome to make sure it's right and it hasn't drifted and stuff like that. And because of that, because it's become kind of powerful, we're starting to actually see this weird thing happen where the practitioners are starting to converge with the users. And that is to say that, okay, if I'm in Tableau right now, I can stand up Salesforce Einstein Discovery, and it will automatically create a nice predictive algorithm for me given the data that I pull in. But what's starting to happen, and we're seeing this from the companies that create business software, so Salesforce, Oracle, SAP, and others, is that they're starting to actually use these same ideas and a lot of deep learning (chuckles) to basically stand up these out-of-the-box, flip-a-switch, and you've got an AI outcome at the ready for business users. And I am very much, you know, I think that's the way that it's going to go, and what it means is that AI is slowly disappearing. And I don't think that's a bad thing. I think if anything, what we're going to see in 2022 and maybe into 2023 is this sort of rush to put this idea of disappearing AI into practice and have as many of these solutions in the enterprise as possible. You can see, like for example, SAP is going to roll out this quarter this thing called adaptive recommendation services, which basically is a cold start AI outcome that can work across a whole bunch of different vertical markets and use cases. It's just a recommendation engine for whatever you need to do in the line of business. So basically, you're an SAP user, you go to turn on your software one day, you're a sales professional let's say, and suddenly you have a recommendation for customer churn. Boom! There it is, that's great. Well, I don't know, I think that's terrifying.
In some ways I think it is the future, that AI is going to disappear like that, but I'm absolutely terrified of it, because I think that what it really does is it calls attention to a lot of the issues that we already see around AI, specific to this idea of what we like to call at Omdia responsible AI. Which is, you know, how do you build an AI outcome that is free of bias, that is inclusive, that is fair, that is safe, that is secure, that is auditable, et cetera, et cetera, et cetera, et cetera. It takes a lot of work to do. And so if you imagine a customer that's just a Salesforce customer, let's say, and they're turning on Einstein Discovery within their sales software, you need some guidance to make sure that when you flip that switch, the outcome you're going to get is correct. And that's going to take some work. And so, I think we're going to see this move, let's roll this out, and suddenly there's going to be a lot of problems, a lot of pushback that we're going to see. And some of that's going to come from GDPR and the others that Sanjeev was mentioning earlier. A lot of it is going to come from internal CSR requirements within companies that are saying, "Hey, hey, whoa, hold up, we can't do this all at once. "Let's take the slow route, "let's make AI automated in a smart way." And that's going to take time. >> Yeah, so a couple of predictions there that I heard. AI simply disappears, it becomes invisible. Maybe if I can restate it that way. And then if I understand it correctly, Brad, you're saying there's a backlash in the near term, people will say, oh, slow down. Let's automate what we can. Those attributes that you talked about are non-trivial to achieve, is that why you're a bit of a skeptic? >> Yeah. I think that we don't have any sort of standards that companies can look to and understand. And we certainly, within these companies, especially those that haven't already stood up an internal data science team, they don't have the knowledge to understand, when they flip that switch for an automated AI outcome, that it's going to do what they think it's going to do. And so we need some sort of standard methodology and practice, best practices, that every company that's going to consume this invisible AI can make use of. And one of the things, you know, that Google sort of kicked off a few years back, that's picking up some momentum, and the companies I just mentioned are starting to use it, is this idea of model cards, where at least you have some transparency about what these things are doing. You know, so like for the SAP example, we know, for example, if it's a convolutional neural network with a long short-term memory model that it's using, we know that it only works on Roman English, and therefore me as a consumer can say, "Oh, well I know that I need to do this internationally. "So I should not just turn this on today." >> Thank you. Carl, could you add anything, any context here? >> Yeah, we've talked about some of the things Brad mentioned here at IDC in our future of intelligence group, regarding in particular the moral and legal implications of having a fully automated, you know, AI driven system. Because we already know, and we've seen, that AI systems are biased by the data that they get, right?
So if they get data that pushes them in a certain direction, I think there was a story last week about an HR system that was recommending promotions for White people over Black people, because in the past, you know, White people were promoted and deemed more productive than Black people, but it had no context as to why, which is, you know, because Black people were being historically discriminated against, but the system doesn't know that. So, you know, you have to be aware of that. And I think that, at the very least, there should be controls when a decision has either a moral or legal implication. When you really need a human judgment, it could lay out the options for you, but a person actually needs to authorize that action. And I also think that we always will have to be vigilant regarding the kind of data we use to train our systems, to make sure that it doesn't introduce unintended biases. To some extent, they always will. So we'll always be chasing after them. But that's (indistinct). >> Absolutely, Carl, yeah. I think that what you have to bear in mind as a consumer of AI is that it is a reflection of us, and we are a very flawed species. And so if you look at all of the really fantastic, magical looking super models we see, like GPT-3 and 4 that's coming out, they're xenophobic and hateful, because the data that they're built upon, and the algorithms, and the people that build them, are us. So AI is a reflection of us. We need to keep that in mind. >> Yeah, the AI is biased 'cause humans are biased. All right, great. All right, let's move on. Doug, you mentioned, you know, a lot of people said that data lake, that term, is not going to live on, but it seems to be, we have some lakes here. You want to talk about lake house, bring it on. >> Yes, I do. My prediction is that lake house, and this idea of a combined data warehouse and data lake platform, is going to emerge as the dominant data management offering. I say offering, that doesn't mean it's going to be the dominant thing that organizations have out there, but it's going to be the predominant vendor offering in 2022. Now, heading into 2021, we already had Cloudera, Databricks, Microsoft, Snowflake as proponents; in 2021, SAP, Oracle, and several of these fabric, virtualization and mesh vendors joined the bandwagon. The promise is that you have one platform that manages your structured, unstructured and semi-structured information, and it addresses both the BI analytics needs and the data science needs. The real promise there is simplicity and lower cost. But I think end users have to answer a few questions. The first is, does your organization really have a center of data gravity, or is the data highly distributed? Multiple data warehouses, multiple data lakes, on premises, cloud. If it's very distributed, and you'd have difficulty consolidating, and that's not really a goal for you, then maybe that single platform is unrealistic and not likely to add value to you. You know, also the fabric and virtualization vendors, the mesh idea, that's where, if you have this highly distributed situation, that might be a better path forward. The second question, if you are looking at one of these lake house offerings, you are looking at consolidating, simplifying, bringing together to a single platform. You have to make sure that it meets both the warehouse need and the data lake need. So you have vendors like Databricks, Microsoft with Azure Synapse.
They're new, really, to the data warehouse space, and they're having to prove that the data warehouse capabilities on their platforms can meet the scaling requirements, can meet the user and query concurrency requirements, meet those tight SLAs. And then on the other hand, you have Oracle, SAP, Snowflake, the data warehouse folks, coming into the data science world, and they have to prove that they can manage the unstructured information and meet the needs of the data scientists. I'm seeing a lot of the lake house offerings from the warehouse crowd managing that unstructured information in columns and rows. And some of these vendors, Snowflake in particular, are really relying on partners for the data science needs. So you really have got to look at a lake house offering and make sure that it meets both the warehouse and the data lake requirement. >> Thank you, Doug. Well, Tony, if those two worlds are going to come together, as Doug was saying, the analytics and the data science world, does there need to be some kind of semantic layer in between? I don't know. Where are you on this topic? >> (chuckles) Oh, didn't we talk about data fabrics before? Common metadata layer (chuckles). Actually, I'm almost tempted to say let's declare victory and go home. And that this has actually been going on for a while. I actually agree with, you know, much of what Doug is saying there. Which is that, I mean, I remember as far back as, I think it was like 2014, I was doing a study. I was still at Ovum, (indistinct) Omdia, looking at all these specialized databases that were coming up and seeing that, you know, there's overlap at the edges. But yet, there was still going to be a reason at the time that you would have, let's say, a document database for JSON, you'd have a relational database for transactions and for data warehouse, and you had basically something at that time that resembled Hadoop for what we consider your data lake. Fast forward, and the thing is, what I was seeing at the time is that they were sort of blending at the edges. That was, say, about five to six years ago. And the lake house is essentially the current manifestation of that idea. There is a dichotomy in terms of, you know, it's the old argument, do we centralize this all, you know, in a single place, or do we virtualize? And I think it's always going to be a union, yeah, and there's never going to be a single silver bullet. I do see that there are also going to be questions, and these are points that Doug raised. That, you know, what do you need for your performance there, or for your performance characteristics? Do you need, for instance, high concurrency? Do you need the ability to do some very sophisticated joins? Or is your requirement more to be able to distribute the processing, you know, as far as possible, to essentially do a kind of a brute force approach? All these approaches are valid based on the use case. I just see that essentially the lake house is the culmination of, it's nothing new. It's a relatively new term introduced by Databricks a couple of years ago, but this is the culmination of basically what's been a long-time trend. And what we see in the cloud is that we start seeing data warehouses offer, as a checkbox item, "Hey, we can basically source data in cloud storage, in S3, "Azure Blob Store, you know, whatever, "as long as it's in certain formats, "like, you know, Parquet or CSV or something like that." I see that as becoming kind of a checkbox item.
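The checkbox Tony describes, SQL and data science over the same open-format files in object storage, can be illustrated with a short sketch. This is a minimal example assuming a Spark environment already configured with credentials for the object store; the bucket path and column names are hypothetical placeholders, not any particular vendor's lake house.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse-sketch").getOrCreate()

# The same Parquet files in cloud object storage serve both workloads.
orders = spark.read.parquet("s3a://example-bucket/lake/orders/")

# BI / warehouse-style access: SQL directly over the lake.
orders.createOrReplaceTempView("orders")
daily = spark.sql("""
    SELECT order_date, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
    ORDER BY order_date
""")
daily.show()

# Data-science-style access: the same data pulled into a local frame
# for feature engineering or model training.
features = orders.select("customer_id", "amount", "order_date").toPandas()
print(features.head())
```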
So to that extent, I think that the lake house, depending on how you define it, is already a reality. And in some cases, maybe new terminology, but not a whole heck of a lot new under the sun. >> Yeah. And Dave Menninger, I mean, a lot of these, thank you Tony, but a lot of this is going to come down to, you know, vendor marketing, right? Some people just kind of co-opt the term, we talked about, you know, data mesh washing. What are your thoughts on this? (laughing) >> Yeah, so I used the term data platform earlier. And part of the reason I use that term is that it's more vendor neutral. We've tried to sort of stay out of the vendor terminology patenting world, right? Whether the term lake house is what sticks or not, the concept is certainly going to stick. And we have some data to back it up. About a quarter of organizations that are using data lakes today already incorporate data warehouse functionality into it. So they consider their data lake and data warehouse one and the same; about a quarter of organizations, a little less, but about a quarter of organizations feed the data lake from the data warehouse, and about a quarter of organizations feed the data warehouse from the data lake. So it's pretty obvious that three quarters of organizations need to bring this stuff together, right? The need is there, the need is apparent. The technology is going to continue to converge. I like to talk about it, you know, you've got data lakes over here at one end, and I'm not going to talk about why people thought data lakes were a bad idea, because they thought you just throw stuff in a server and you ignore it, right? That's not what a data lake is. So you've got data lake people over here and you've got database people over here, data warehouse people over here; database vendors are adding data lake capabilities and data lake vendors are adding data warehouse capabilities. So it's obvious that they're going to meet in the middle. I mean, I think it's like Tony says, I think we should declare victory and go home. >> So just a follow-up on that, are you saying the specialized lake and the specialized warehouse, do they go away? I mean, Tony, data mesh practitioners would say, or advocates would say, well, they could all live. It's just a node on the mesh. But based on what Dave just said, are we gonna see those all morph together? >> Well, number one, as I was saying before, there's always going to be this sort of, you know, centrifugal force or this tug of war between do we centralize the data, do we virtualize? And the fact is, I don't think that there's ever going to be any single answer. I think in terms of data mesh, data mesh has nothing to do with how you physically implement the data. You could have a data mesh basically on a data warehouse. It's just that, you know, the difference being that you could use the same physical data store, but everybody's logically, you know, basically governing it differently, you know? Data mesh, in essence, is not a technology, it's processes, it's governance processes. So essentially, you know, I basically see that, you know, as I was saying before, this is basically the culmination of a long-time trend. We're essentially seeing a lot of blurring, but there are going to be cases where, for instance, if I need, let's say, upserts, or I need high concurrency or something like that, there are certain things that I'm not going to be able to efficiently get out of a data lake.
And, you know, if I'm doing a system where I'm just doing really brute-force, very fast file scanning and that type of thing, so I think there always will be some delineations, but I would agree with Dave and with Doug, that we are seeing basically a confluence of requirements, that we need to essentially have the elements, you know, the abilities of a data lake and a data warehouse, these need to come together, so I think. >> I think what we're likely to see is organizations look for a converged platform that can handle both sides for their center of data gravity, and the mesh and the fabric virtualization vendors, they're all on board with the idea of this converged platform, and they're saying, "Hey, we'll handle all the edge cases "of the stuff that isn't in that center of data gravity "but that is off distributed in a cloud "or at a remote location." So you can have that single platform for the center of your data and then bring in virtualization, mesh, what have you, for reaching out to the distributed data. >> As Dave basically said, people are happier when they virtualize data. >> I think we have at this point, but to Dave Menninger's point, they are converging. Snowflake has introduced support for unstructured data. So obviously we're literally splitting hairs here. Now what Databricks is saying is that, "aha, but it's easier to go from data lake to data warehouse "than it is from data warehouse to data lake." So I think we're getting into semantics, but we're already seeing these two converge. >> So take somebody like AWS, they've got, what, 15 data stores? Are they going to converge those 15 data stores? This is going to be interesting to watch. All right, guys, I'm going to go down the list and do like a one word each, and you guys, each of the analysts, if you would just add a very brief sort of course correction for me. So Sanjeev, I mean, governance is going to be... Maybe it's the dog that wags the tail now. I mean, it's coming to the fore, all this ransomware stuff, which, you really didn't talk much about security, but what's the one word in your prediction that you would leave us with on governance? >> It's going to be mainstream. >> Mainstream. Okay. Tony Baer, mesh washing is what I wrote down. That's what we're going to see in 2022, a little reality check, you want to add to that? >> Reality check, 'cause I hope that no vendor jumps the shark and claims they're offering a data mesh product. >> Yeah, let's hope that doesn't happen. If they do, we're going to call them out. Carl, I mean, graph databases, thank you for sharing some high growth metrics. I know it's early days, but magic is what I took away from that, so magic database. >> Yeah, I would actually, I've said this to people too. I kind of look at it as a Swiss Army knife of data, because you can pretty much do anything you want with it. That doesn't mean you should. I mean, there's definitely the case that if you're managing things that are in a fixed schematic relationship, probably a relational database is a better choice. There are times when a document database is a better choice. A graph can handle those things, but maybe not as well. It may not be the best choice for that use case. But for a great many, especially with the new emerging use cases I listed, it's the best choice. >> Thank you. And Dave Menninger, thank you by the way for bringing the data in, I like how you supported all your comments with some data points. But streaming data becomes the sort of default paradigm, if you will, what would you add?
>> Yeah, I would say think fast, right? That's the world we live in, you got to think fast. >> Think fast, love it. And Brad Shimmin, love it. I mean, on the one hand I was saying, okay, great. I'm afraid I might get disrupted by one of these internet giants who are AI experts. I'm going to be able to buy instead of build AI. But then again, you know, I've got some real issues. There's a potential backlash there. So give us your bumper sticker. >> I would say, going with Dave, think fast and also think slow, to reference the book that everyone talks about. I would say really that this is all about trust, trust in the idea of automation and a transparent and visible AI across the enterprise. And verify, verify before you do anything. >> And then Doug Henschen, I mean, I think the trend is your friend here on this prediction, with lake house really becoming dominant. I liked the way you set up that notion of, you know, the data warehouse folks coming at it from the analytics perspective, and then you get the data science worlds coming together. I still feel as though there's this piece in the middle that we're missing, but your final thoughts, we'll give you the (indistinct). >> I think the idea of consolidation and simplification always prevails. That's why the appeal of a single platform is going to be there. We've already seen that with, you know, Hadoop platforms and moving toward cloud, moving toward object storage, and object storage becoming really the common storage point, whether it's a lake or a warehouse. And that second point, I think ESG mandates are going to come in alongside GDPR and things like that to up the ante for good governance. >> Yeah, thank you for calling that out. Okay folks, hey, that's all the time that we have here. Your experience and depth of understanding on these key issues on data and data management were really on point, and they were on display today. I want to thank you for your contributions. Really appreciate your time. >> Enjoyed it. >> Thank you. >> Thanks for having me. >> In addition to this video, we're going to be making available transcripts of the discussion. We're going to do clips of this as well, and we're going to put them out on social media. I'll write this up and publish the discussion on wikibon.com and siliconangle.com. No doubt, several of the analysts on the panel will take the opportunity to publish written content, social commentary or both. I want to thank the power panelists, and thanks for watching this special CUBE presentation. This is Dave Vellante, be well, and we'll see you next time. (bright music)
Chris McNabb & Ed Macosky, Boomi | Hyperautomation & The Future of Connectivity
(energetic music) >> Hello, welcome to the CUBE's coverage of Boomi's Out of This World event. I'm John Furrier, host of theCUBE. We've got two great guests here, Chris McNabb, CEO of Boomi, and Ed Macosky, SVP and Head of Products, talking about hyper automation and the future of connectivity. Gentlemen, thank you for coming on theCUBE, great to see you. >> John, it is great to see you again as well. Looking forward to the next in-person one. >> I miss the in-person events, you guys have had great events and a lot of action happening. Love the big news of going out in your own direction, big financing, change of control, all that good stuff happening, industry's growing. Chris, this is a big move. You know, the industry is changing. Can you give us some context to, you know, what's going on in automation and connectivity, because iPaaS, which you guys have pioneered, has been a big part of cloud and cloud scale, and now we're seeing next-generation things happening. Data, automation, edge, modern application development, all happening. Set some context, what's going on? >> Yeah, John, listen, it's a great time to be in our space at this point in time. Our customers, at the end of the day, are looking to create what we announced at last year's event, called Integrated Experiences, which is the combination of user engagement, more awesome connectivity, and making sure high quality data goes through that experience, and providing 21st century experiences. And we're right at the heart of that work. Our platform really drives all the services that are needed there. But what our customers really need, and what we're here to focus on today and at Out of This World, is to make sure that we have the world's best connectivity capabilities, and process automation and engagement of constituents, to really do what they want to do, where they want to do it. >> So a lot of big moves happening, what's the story? Take us through the story. I mean, you guys have a transaction with big financing behind it, setting up this intelligent connectivity and automation approach. Take us through the story, what happened? >> Yeah. So, you know, the Boomi business was sold outside of Dell and that deal closed. We are now owned by two top tier private equity firms, FP and TPG. That sale is completed and now we are ready to unleash the Boomi business on this market. I think it's a great, it's a great transaction for Dell, and it's a great transaction for FP, TPG, but most specifically, it's really a world-class transaction for the Boomi business, the Boomi customer base, as well as the Boomi employees. So I really looked at this as a win-win-win, and it sets us up for really going after this one. >> Yeah, and there's a huge wave coming, and you're seeing like the, the big wave coming. It's just like, no need to debate it. It's here. It's cloud 2.0, whatever you want to call it, it's scale. IT has completely figured out that it's not only replatforming to the cloud, but you've got to be in the cloud refactoring. This is driving the innovation. And this is really where I see you guys leading. So share with me, what is hyper automation? What does that actually mean? >> So what hyper automation really is, is intelligent connectivity and automation. So our customers have been doing this. It's very specifically related to taking workflows, taking automation within the business, that's been around for a long time anyway, but adding AI and ML to it.
So, as you continue to automate your business, you're getting more and more steam, and you get more and more productivity out of the (mumbles) organization, or productivity from the (mumbles). >> So Chris, tell us more about this hyper automation, because you guys have a large install base. Take us through some of the numbers of the customer base, and where the dots are connecting as they look at the new IT landscape as it transforms. >> Yeah, John, great question. You know, when I talk to, you know, as many of our 18,000 customers worldwide as I can get to, you know, what they are saying very clearly is their IT landscape is getting more complicated, more distributed, more siloed, and it has more data. And as you work through that problem, what they're trying to accomplish is they're trying to engage their constituents in a 21st century way, however they want, whether it be mobile, web, portals, chatbots, old-fashioned telephones. And in doing that, that complicated area is extraordinarily difficult. So that's the pervasive problem that Boomi is purpose-built to help solve. And our customers start out sometimes with just great connectivity. Hyper automation is where the real value comes in. That's where your constituents see a complete difference in how they interoperate with (mumbles). >> So, first of all, I love the word hyper automation because it reminds me of hyper scale, which, you know, look at the Amazons and the cloud players. You know, that kind of game has kind of evolved. I mean, the old joke is what inning are we in, right? And, to use a baseball metaphor, I think it's a doubleheader and game one is won by the cloud. Right? So, Amazon wins game one, game two is all about data. You guys, this is core to Boomi, and I want to get your thoughts on this, because data is the competitive advantage. But if you look at the pandemic and the stories that we're reporting on, and this re:Invent specifically, that'll be a big story. The refactoring in the cloud is a big strategic effort, not just replatforming, refactoring in the cloud. So this is really where you guys are, I think, skating to where the puck is. Am I getting it right, can you just share that vision? >> Yeah, John. From a vision perspective, I think the pandemic has really accelerated people's expectations. You know, people need to be more nimble, more flexible. And because they have a fair amount in the cloud, they have to understand what is the next tier, what are the next generation offerings that we put together, tie together and connect. That is not only connecting systems, apps, databases, and clouds. You're connecting people, processes and devices. So we're going to have a great story here at Out of This World about how we connect a biometric vest to a video system to a network monitoring hub to protect an officer's safety in Amsterdam in real time. We can deploy officers to a location, all automatic. All decisions are automatic, all locations, cameras (mumbles), all automatically. And that's only possible when we think about the next generation technology that Boomi provides, and the next generation capabilities by the other providers in that solution. >> Ed, before we get to the product announcements for the event, we'll get your reaction to that. I see in the cloud you can refactor, you got data, you got latency issues. These all kind of go away when you start thinking about integrating it all together. What's your reaction to refactoring as the next step? >> Yeah.
So my reaction, I mean, exactly what Chris said, but as our customers are moving to the cloud, they're not choosing just one cloud anymore. It is multi-cloud, it's multidimensional (mumbles), you got multi-cloud, you got hybrid cloud, you have edge devices, et cetera. And our technology just naturally puts us in the space to do that. And based on what we see with our customers, we actually have, we've connected over 189,000 different devices, application points, data endpoints, et cetera, for people. And we're seeing that growth of 44% year on year. So, we're seeing that explosion in helping customers, and we just want to accelerate that and help them react to these changes as quickly as they possibly can. And a lot of it doesn't require, you know, massive uplift projects or technology. We've been lucky enough to be visionaries with our deployment technology, being able to embrace this new environment that's coming up, so we're right at the forefront of this (mumbles). >> Yeah. I love the intelligence angle, I love hyper automation. Okay, let's get into the product announcements of the Out of This World event. What are some of the announcements? Share with us the key highlights. >> Yeah. So first and foremost, we've announced a vision and our tactics. So I talked about the 189,000 applications, data endpoints, et cetera, that our customers are connecting to today. And they're moving very, very rapidly with that, and it's no longer about named connections and having these fixed connectors that connect to applications; you need to be able to react intelligently, pick the next endpoint, and connect very quickly and bring that into your ecosystem. So we've got this vision towards connectivity as a service that we're working on, that will basically normalize that connectivity across all of the applications that are plugging into Boomi's iPaaS ecosystem and allow customers to get up and running very quickly. So I'm really excited about that. The other thing we announced is Boomi Event Streams. So in order to complete this, we can't just, we've been on this EDA journey, Event-Driven Architecture, for the last couple of years, and embracing an open ecosystem. But we found that in order to go faster for our customers, it's very, very important that we bring this into Boomi's iPaaS platform. Our partnerships in this area are still very important for us. But there is an avenue where our customers are demanding, "hey, bring this into your platform." And we need to move faster with this, and our new Boomi Event Streams will allow them to do that. We also recently just announced the Boomi Discover Catalog. So this is an ongoing vision for us. We're building up to a marketplace where customers and partners can all participate, whether it's inside of a customer's ecosystem, or partners, or Boomi, et cetera, offering these quick onboarding solutions for their customers. So we will learn intelligently as people have these solutions to help customers onboard, and build, and connect to these systems faster. So that's kind of how they all come together for us in a hyper automation scenario. The last thing, too, is we are working on RPA as last mile connectivity. That's where we see RPA today; you know, gone are going to be the days of having RPA from a desktop perspective, where you have to have someone manually run that. Although with RPA, our runtime technology extends to desktops anyway.
So we are going to bring RPA technology into the iPaaS platform as we move forward here so that our customers can enjoy the benefits of that as well. >> Real quick, I was going to ask about the event streams, but I love this RPA angle. Tell me more about how that impacts things, because I think that's pretty big. What's the impact when you bring robotic process automation, RPA, into iPaaS? What's the impact for the customer? >> The impact for the customer is that we believe customers can really enjoy true cloud when it comes to RPA technology. Today, most of the RPA technologies, like I said, are deployed at a desktop and they are manually run by some folks. It helps speed up the business user and adds some value there. But our technology will surely bring it to the cloud and allow that connectivity of what a robotic process automation solution will be doing, so it can tap into the iPaaS ecosystem and extend and connect that data up into the cloud or even other operating systems that the customer (mumbles). >> Okay. So on the event streams that you guys announced, obviously that's part of the event-driven architecture journey you guys have been on. Why is it important for customers? Can you just take a minute to explain why event streams and why event-driven approaches are important? >> Because customers need access to the data in real time. So there's two reasons why it's very important to the customers. One is, Event-Driven Architectures are on the rise in order to truly scale up an environment. If you're talking tens of millions of transactions, you need to have an Event-Driven Architecture in place in order to manage that state, so you don't have any message loss or any of those types of things. So it's important that we continue to invest as we continue to scale with our customers and they scale up their environments with us. The other reason it is very important for us to bring it into our ecosystem, within our platform, is that our customers enjoy the luxury of having an integrated experience as they're building, you know, intelligent connectivity and automation solutions within our platform. So to ask a customer to go work with a third party technology, versus enjoying it in an integrated experience, is why we want to bring it in and have them get their (mumbles) much faster. >> I really think you guys are onto something because it's a partnership world. Ecosystems are now everywhere. There's ecosystems because everything's a platform now, that's evolving from tools to platforms, and it's not one platform rules the world. This is the benefit of how the cloud's emerging, almost a whole nother set of cloud capabilities. I love this vision and you start to see that, and you guys did talk about this thing called the connectivity marketplace. What is that? Is that a place where people are sharing as well as partnering? I know a lot of partners are connected with each other and they want to have it all automated. How does this all play in? Can you just quickly explain that? >> Yeah, so in the last year we actually launched an open source community around connectors and that sort of thing, and we invested pretty heavily in our SDK. We see quite a big uptake in the ecosystem of them building specific connectors, as well as solutions.
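To make Ed's "no message loss" point above concrete, here is a minimal, generic sketch of the publish/acknowledge pattern that event-driven architectures rely on. It is an illustration only, not Boomi Event Streams' actual API; the class and method names (EventStream, poll, ack, nack) are hypothetical. The key idea is that an event stays with the stream until the consumer explicitly acknowledges it, so a consumer failure leads to redelivery rather than a dropped message.

```python
# Minimal, generic sketch of an acknowledged event stream -- not Boomi's API.
# Events stay buffered until the consumer acks them, so a consumer crash
# results in redelivery instead of message loss. All names are hypothetical.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Event:
    key: str
    payload: dict = field(default_factory=dict)

class EventStream:
    def __init__(self):
        self._pending = deque()   # buffered events (a real broker persists these)
        self._in_flight = {}      # delivered but not yet acknowledged

    def publish(self, event: Event) -> None:
        self._pending.append(event)

    def poll(self):
        """Deliver the next event but keep holding it until it is acked."""
        if not self._pending:
            return None
        event = self._pending.popleft()
        self._in_flight[id(event)] = event
        return event

    def ack(self, event: Event) -> None:
        self._in_flight.pop(id(event), None)      # safe to forget it now

    def nack(self, event: Event) -> None:
        """Consumer failed: requeue the event so nothing is lost."""
        if id(event) in self._in_flight:
            self._pending.append(self._in_flight.pop(id(event)))

# Usage: an order event flows to a downstream system asynchronously.
stream = EventStream()
stream.publish(Event(key="order-123", payload={"status": "created"}))

event = stream.poll()
try:
    print(f"processing {event.key}: {event.payload}")  # e.g. update a CRM record
    stream.ack(event)                                   # only now is it removed
except Exception:
    stream.nack(event)                                  # redelivered on the next poll
```

The same acknowledge-or-requeue discipline is what lets an event-driven backbone absorb tens of millions of transactions without dropping state, and bringing it inside the integration platform is what gives customers the integrated experience Ed describes rather than bolting on a third-party broker.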
And our partners were very excited about partnering with us and (mumbles) to market and those sorts of things, so that they can offer solutions to their customers on a marketplace. So we are reacting to the popular demand that we have from our partners and customers where they say, hey, we'd love to participate in this marketplace. We'd love to be able to work with you and publish solutions that we're delivering to our customers. So we're fulfilling that mission on behalf of our customers and partners. >> You know, Chris, when you look at the cloud native ecosystem at the high level, you're seeing open source driving a big part of it. Large enterprises, large customers are moving to that next level of modern application development. They're partnering, right? They're going to outsource and partner some edge components, maybe bring someone else over here, have a supplier, everything's codified now in the cloud, aka DevOps meets, you know, business logic. So this seems to be validated. How do you see this evolving? How does this iPaaS kind of environment just become the environment? I mean, it seems to me that that's what's happening. What's your reaction to that trend? >> I think as iPaaS evolves, we've extended the breadth of our iPaaS dramatically. We're not just an integration platform. We take the broadest definition of the word integration, I guess I'll say it that way. You're integrating people. Connecting people is just as important as connecting cloud applications. So, you know, that's part one in terms of the vision of what it is. Two is going to be the importance of speed and productivity. It's critically important that people can figure out how to connect, because endpoints are exploding. You have to connect these extraordinarily quickly, in fractions of the amount of time that it ever took, and coding, code is just not the way that works. You have to have it abstracted and you have to make it simpler, low-code, no-code environments, configuration-based environments, make it simpler for more people outside of IT to actually use the solutions. So that's where these platforms become much more pervasive in the enterprise, solve a much bigger problem, and they solve it at speed. So, you know, the vision for this is just to continue to accelerate that. You know, when we got started here, things used to take months and months, then it came down to weeks, it came down to days, it's down to hours. We're looking at seconds to define connectivity, an easy button to just get connected and get working. That's our vision for intelligent connectivity. >> Okay, so we're talking about hyper automation and the future of connectivity, that's the segment here. What is the future of connectivity? Take me through that. How does that evolve? I can see a marketplace. I can see an ecosystem. I see people connecting with partners and applications and data. What is the future of connectivity? >> The vision, right, for connectivity, and we talk about our connectivity as a service, but you know, you have to think about it as connectivity instead of connectors, a thing that talks to it. And what we look at is, you should be able to point to an endpoint, pick a cloud app, any cloud application. You have an API.
I should be able to, automatically, programmatically and dynamically, anytime I want, go interrogate that, browse it, hit a button and I've established connectivity. And in the amount of time it's taken me to explain it, you should almost be able to work through it and be connected to that and talking to that endpoint. We're going to bring that kind of connectivity, that dynamically generated, automatic connectivity, into our platform, and that's the vision. >> And for people connecting to it from a product standpoint, this should be literally plug and play, so to speak, an old term, but really seamless, automate and play, kind of just connect. >> Yes, absolutely. And while Chris was talking, I was thinking about a customer, who I won't name, but during one of the interviews coming up at Out of This World, the customer was describing to us, with the capabilities that we have today already, how he, a CTO, was able to get an integration up and running before his team was able to write the requirements for the integration. So those are the types of things we're looking to continue to add to. And we're also, you know, not asking our customers to make a choice. You can scale up and scale down. It's very important for our customers to realize, whether the problem's really big or really small, our platform's there to get it done fast and in a secure way. >> I see a lot of people integrating in the cloud with each other, and with their own other apps, seeing huge benefits while still working on premise across multiple environments. So this kind of new operating model is evolving, some people call it refactoring, whatever term you want to use. It's a change in value creation, it creates new value. So as you guys go out, Chris, take us through your vision on next steps. Okay. You're going to be independent. You've got the financing behind you. Dell got a nice deal. You guys are going forward. What's next for Boomi? >> Well, listen John, you know, we couldn't be more excited having the opportunity to truly unleash this business out on the market, and you know, our employees are super excited. Our customers are going to benefit. Our customers are going to get a lot more product innovation every single day. We already put out 11 releases a year. There's literally a hundred different features we put in that product. We're looking to double down on that and really accelerate our path towards those things we were talking about today. Engagement with our customers is going to get much better, you know, doubling down on customer success. People in support, people in the field, engaging our customers in so many different ways. There's so much more, folks. When we partner with our customers, we care about their overall success, and this investment really gives us so many avenues now to double down on making sure that their journey with us, and their journey towards their success as a business, is one we help them get to. >> You guys have got a lot of trajectory and experience and knowledge in this industry, I think. It's really kind of a great position to be in. And as you guys take on this next wave, Chris McNabb, CEO of Boomi, Ed Macosky, SVP, Head of Product, thanks for coming on theCUBE, and this is theCUBE's coverage of Boomi's Out of This World. I'm John Furrier, your host. Thanks for watching. (upbeat music)
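Ed's "connectivity instead of connectors" vision above, pointing at any cloud application's API and interrogating it on the fly, maps to a pattern that is easy to sketch when the endpoint publishes a machine-readable description of itself. The example below is a generic illustration, not Boomi's connectivity service: the URL is hypothetical, and the sketch simply downloads an OpenAPI document and enumerates the operations it finds, which is the "interrogate the endpoint, then connect" step described in the interview.

```python
# Generic sketch of dynamic endpoint discovery: instead of shipping a
# hand-built connector per application, read the endpoint's own OpenAPI
# description and learn what it can do. The URL below is hypothetical.
import json
import urllib.request

SPEC_URL = "https://api.example.com/openapi.json"  # hypothetical endpoint

def discover_operations(spec_url: str):
    """Download an OpenAPI document and list the operations it exposes."""
    with urllib.request.urlopen(spec_url) as resp:
        spec = json.load(resp)
    operations = []
    for path, methods in spec.get("paths", {}).items():
        for method, details in methods.items():
            if method.lower() in {"get", "post", "put", "patch", "delete"}:
                operations.append((method.upper(), path, details.get("summary", "")))
    return operations

if __name__ == "__main__":
    # In roughly the time it takes to run this, you know what the endpoint
    # offers and can pick the next operation to wire into an integration.
    for method, path, summary in discover_operations(SPEC_URL):
        print(f"{method:6} {path}  {summary}")
```

A real connectivity service would layer authentication, schema normalization and connection management on top of this discovery step, but the core move of programmatically interrogating the endpoint rather than hand-coding to it is the one described above.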
Chris Port & Mandy Dhaliwal, Boomi | Hyperautomation & The Future of Connectivity
>>Hello, welcome to theCUBE's coverage of Boomi's Out of This World. I'm John Furrier, your host of theCUBE, with two great guests here, Chris Port, chief operating officer of Boomi, and Mandy Dhaliwal, who's the chief marketing officer of Boomi. Chris, Mandy, great to see you. Thanks for coming on theCUBE. Appreciate it. >>Thank you for having us. >>This segment is really about the Boomi, uh, trend and customers, um, and the success you guys have had, obviously now on a new trajectory, going to the next level. And you've got a lot of trajectory and success. Chris, as the chief operating officer, we've talked many times about the customer base, uh, and growing, poised perfectly for this next wave. Give us the update on the macro trends around your customer base and how Boomi's helping them and how you're getting your growth. >>Absolutely. And John, really look forward to seeing you in person again soon. Again, this is a great moment in time. We're now more than 18,000 customers, 800 partners globally. We've actually seen an acceleration of the business through the pandemic. Obviously people are trying to do more with less. So it's just an amazing time to see what our customers are doing. Obviously, an explosion of SaaS applications. And when we started thinking about, you know, explosion of endpoints, explosion of data, and now you compound that with a labor shortage, which, you know, I know if anyone's looking, at Boomi we're hiring more than we've ever hired in our life. And obviously we're all seeing this in the space. So now basically a high productivity, high time to value tool set like Boomi is an imperative. It's no longer a luxury. And we're seeing that accelerate, where our customers, you've heard Chris talk about the 189,000 unique endpoints that we now connect to. That's the power of Boomi. You do that in minutes, and that's happening every single day. So again, just extending our footprint, extending our customers, really taking advantage. We're seeing a real tailwind for the business right now. So really excited, and excited to work with you. >>You know, Chris, uh, Dave Vellante and I were talking about, you know, how do you tell the next big breakout success? It's rogue usage, shadow IT. When you have rogue users, that means there's some innovation happening, and you guys have a lot of customers that are hiring because it's all new, it's all new innovation. Mandy, this is kind of like a marketing opportunity. It's like rogue is not a bad word here. There's new functionality. You guys are showing the market that if you go with Boomi, you can get more value. And then new things just emerge. New positions open up, value is being created. It's kind of a sign of value, not a negative. It's a positive. >>Yeah, absolutely. Our customers innovate. They're looking to modernize, transform, new business models, right? The world we live in today. And so there's really no choice. We abstract away that complexity. So our platform gives people the ability to go build quickly. And so that's really the thought leadership. >>You know, I love Andy Jassy, former CEO of AWS, now the CEO of Amazon, who always said, uh, Chris, you know, there's no compression algorithm for experience. And he always talked about Amazon being misunderstood. And then finally people go, oh my God, that's the flow. That's the formula of success. And then they're late to the game. A lot of similarities in the Boomi culture, uh, with this notion of undifferentiated heavy lifting you guys take away, and create net new opportunities.
This is an operational opportunity for customers. What's your quick comment on that? >> I love the quote. Again, 26 billion minutes of working with our customers directly. That's a perfect way to put it. I mean, it's a moat. There's no substitute for that. We're trying to bring that to bear every day. And again, with just the imperative of being agile, speed, time to value. I mean, Forrester did a study recently, you know, Boomi is 65% faster in terms of building integrations versus manual coding and, more importantly, legacy middleware. And these are now just imperatives. They're not luxuries anymore. So again, you know, when we bring to bear those 26 billion minutes, everything we do from a customer success org, which is now more than double from a footprint standpoint, you know, over the last 12 to 18 months. And again, trying to build more and more people into that organization, but accumulating success, success is part of our DNA. I mean, you know, the thousand plus people across the globe, it's what we think about every single day, how to make customers successful. And again, to your point, there is no substitute for the experience. >> You know, um, we've been covering Boomi for a long time, Mandy, you know that, and we kind of got the picture right away. And you mentioned, Chris, some of those KPIs, those are real value points that you look to, but ultimately you guys have been successful. And I think one of the tell signs is customer value, always have great customers. So customer success, this is a SaaS term, an iPaaS term, it's part of the cloud. You have to have customer success built in from the beginning. You guys always had that as part of your culture, customer success organizations and operations. What's the update, Chris, with customer success? >> Yeah. Again, I mean, you know, more than doubling the team over the last 18 months, and building this even more into the DNA of Boomi overall, we've completely overhauled what we think is a world-class onboarding experience for both our new customers as well as our existing customers. John, you brought it up, you know, call it rogue, call it whatever. I mean, we help existing customers every day when you give them the best onboarding experience too, so that they can accelerate their journey, which kind of gets into the Boomiverse, our whole community, which, when we were together last time face to face, wasn't what you're seeing now. Just two years ago, and we're now over a hundred thousand members and part of our community growing every single day. Incredibly excited about that, because that brings the knowledge base to all of their experience. And again, it really brings what customers really want, to interact in a digital way, and the Boomiverse is so much more significant, the number of knowledge base articles, the number of marketplace type vehicles that our cohorts construct, talking to each other about what they're doing, it's so much more than it was. But again, you know, the Boomiverse is so vibrant now, it's kind of a force multiplier for, more importantly, how our customers are learning from each other. >> Yeah. And just to tag onto that, John, over 38% of our customers are publicly referenceable. There's a movement here; industry averages aren't that high. So the platform really sells itself, and customers advocate for it nowadays. We're very grateful for that.
As people look at these metrics, you gotta nail customer success, which from day one, you gotta have the usage metrics. You've gotta have the integration. Now you've got hyper automation. And as you start getting the ecosystem, Mandy, you've got a branding opportunity here. You got, you have, uh, ecosystem, which is another tale sign of success. When you start having that word of mouth. I remember when shadow, it was kind of like poo-pooed, but that was the road behaviors became the cloud. You starting to see you guys see this ecosystem, you've kind of crossed the chasm, create opportunity for your brand. What's your reaction. >>Yeah, absolutely. And we haven't done any brand work yet. Right? That's common. So, you know, we're just getting started. >>Okay. So I have to ask what this viral thing going on. That's going to go, boom, go Boomi it. So a lot of kind of double entendres there, it and boom, you know, everyone knows that icon on their text. Boom. You know, it's good. Things are booming. What's going on? He was the update go, boom. It Boomi is, >>Yeah. So it's go movie. And this was something that our customers brought to us during the pandemic. We didn't have much opportunity. Honestly, we were all sitting behind our computer screens. So we decided that we were going to start to hold wine conversations with customers, just to check in, see how they're doing, see how we can help and get them together to share a story of how they're handling disruptions to the business. So over the course of several months, talking to customers globally, I started to hear people say, well, I told my so-and-so because it'll get done. If you have a problem, doesn't matter what it is. And all of a sudden they crystallized for me like, you know what, this is a movement. And so this wasn't something the marketing team dropped up. This is something we heard from our and have taken it to market. Now our team members talk about it or customers are talking about it. And really, again, it's a Testament to the pervasiveness and capability platform. You start with the connect, but you're able to grow with us as your business changes and opportunities advisement. >>Well, you know, that's a really good indicator of, uh, net, net, net promoter score kind of vibe when people are giving you your marketing slogans, uh, from happy customers. So a really great congratulations to the whole team there. Can you give us some specific examples since you mentioned referenceable customers of customer examples and take me through some of the highlights in your opinion, that kind of show where this is going in terms of customer use case and value. >>Yeah. And I'll start with one that's very near and dear and obviously very relevant, right? There's there's been some press on Moderna here recently. Um, you know, they were in the race to find a cure for COVID-19. They were looking to bring on new employees and they couldn't bring them on onboard these people. So they leveraged the technology to do an integrated, uh, pursuit of driving customers onto their own to their employee platforms. So we ripped, it celebrated their onboarding cut that time in half. So they could actually start working on what matters. So then undifferentiated heavy lifting around the administrative tasks associated with getting my social security number, as well as other aspects that we all have joining a company that's automated, you can get to work faster. 
So that really helped improve drug development time and make a real difference in terms of getting the vaccine to market. >> So that's one tangible example. A second example, a customer of ours, uh, a drinks customer running NetSuite. They had to find different routes to market, right? And so they went direct to consumer. So they expanded their business through a global pandemic by leveraging Boomi technology and integrating commerce with their financial systems, to be able to get to customers directly and also manage their omnichannel in a new way. So again, innovate or die, right? Then we have another customer in India, a government entity, where citizens had to go in person to get their health ID cards. Well, offices are closed. Nobody's allowed to be in person anymore. Within one week, they digitally transformed so they could disseminate healthcare cards, in a critical time in a global pandemic, to their citizens and help them get healthier. So three tangible examples of how we, just in the last 18 months, have been able to help these customers. >> So Chris, you guys have been operating a great business. Okay. Now you're on your own. You're independent. You've got some great financing partners behind you, independent company, great trajectory, building on that. A lot of economies of scale you guys have built into it. Mandy, you've got great customers. Where's the next journey for you guys? Take us through the operational growth strategy, uh, for Boomi. >> Well look, I mean, obviously we're on a hiring spree, more hiring than we've ever done, and that's pervasive across the entire business, a real focus on product, engineering, and our go-to-market. But we're also, you know, as you heard from Chris, really redefining iPaaS. I mean, when I think about what I'm most excited about, it's a few things. One, we're violently aligned from, call it, the chairman of the board to the newest team member. You know, we know what the opportunity is. We're all aligned. But as importantly, it's what we're doing from a product perspective. You know, you've heard about intelligent connectivity, you've heard about automating connectivity, what we're doing from a Discover perspective, EDA, everything we're doing in the marketplace, really accelerating what the adoption opportunities are for Boomi across the whole Boomiverse and across all of those new customers that we're acquiring, and then ultimately seeing what they do. I mean, again, I love what Mandy says. I feel so strongly about this: within every single company in the world, I mean, it literally should be go Boomi it, because the opportunities are expansive and endless in terms of what we can do together. And that's what I'm excited about, really kind of unleashing this company on the world, seeing what we can do next as we really think about this next iteration. >> Mandy, real quick to you, uh, when people say go Boomi it, when your customers say that, what does it mean to them? Why are they saying it? Take us through some of the psychology and some of the implications of, and the meaning of, the words go Boomi from a customer perspective. >> Yeah. Great question. I think, first of all, it's a testament to the trust, right? It's just going to work, right? So go get it done. It'll be fast. It'll be easy. It is not complex at all. Drag and drop visual interface. Just go make it happen and move on to the next thing. Data is critical, right?
It's the lifeblood of any organization, and that backbone of connectivity gives our customers confidence to go to work. >> Awesome stuff. Chris, final word for you. If you can just share, in your opinion, speaking to your customers out there and future customers, what would you say to them as you guys go on this next leg of the journey for Boomi? What would you say to them? >> Yeah, I would say come partner with us. Come understand what we can do for your business. Come understand what true intelligent, automated connectivity at light speed looks like, in terms of how fast we can do that with you. And let's go explore the art of the possible, because to me, that's... >> Awesome, Chris. Great to see you, and Mandy, great to see you virtually. Can't wait to see you in person at the next event, uh, and congratulations on all the success, and looking forward to covering the next leg of the journey of Boomi. Thanks for coming on. Okay. This is theCUBE's coverage of Boomi's Out of This World event. I'm John Furrier, host of theCUBE. Thanks for watching.
Breaking Analysis: The Future of the Semiconductor Industry
from the cube studios in palo alto in boston bringing you data driven insights from the cube and etr this is breaking analysis with dave vellante semiconductors are the heart of technology innovation for decades technology improvements have marched the cadence of silicon advancements in performance cost power and packaging in the past 10 years the dynamics of the semiconductor industry have changed dramatically soaring factory costs device volume explosions fabulous chip companies greater programmability compressed time to tape out a lot more software content the looming presence of china these and other factors have changed the power structure of the semiconductor business chips today power every aspect of our lives and have led to a global semiconductor shortage that's been well covered but we've never seen anything like it before we believe silicon's success in the next 20 years will be determined by volume manufacturing capabilities design innovation public policy geopolitical dynamics visionary leadership and innovative business models that can survive the intense competition in one of the most challenging businesses in the world hello and welcome to this week's wikibon cube insights powered by etr in this breaking analysis it's our pleasure to welcome daniel newman in one of the leading analysts in the technology business and founder of futurum research daniel welcome to the program thanks so much dave great to see you thanks for having me big topic yeah i'll say i'm really looking forward to this and so here's some of the topics that we want to cover today if we have time changes in the semiconductor industry i've said they've been dramatic the shift to nofap companies we're going to talk about volume manufacturing those shifts that have occurred largely due to the arm model we want to cover intel and dig into that and what it has to do to to survive and thrive these changes and then we want to take a look at how alternative processors are impacting the world people talk about is moore's law dead is it alive and well daniel you have strong perspectives on all of this including nvidia love to get your thoughts on on that plus talk about the looming china threat as i mentioned in in the intro but daniel before we get into it do these topics they sound okay how do you see the state of the semiconductor industry today where have we come from where are we and where are we going at the macro level there are a lot of different narratives that are streaming alongside and they're not running in parallel so much as they're running and converging towards one another but it gradually different uh you know degrees so the last two years has welcomed a semiconductor conversation that we really hadn't had and that was supply chain driven the covid19 pandemic brought pretty much unprecedented desire demand thirst or products that are powered by semiconductors and it wasn't until we started running out of laptops of vehicles of servers that the whole world kind of put the semiconductor in focus again like it was just one of those things dave that we as a society it's sort of taken for granted like if you need a laptop you go buy a laptop if you needed a vehicle there'd always be one on the lot um but as we've seen kind of this exponentialism that's taken place throughout the pandemic what we ended up realizing is that semiconductors are eating the world and in fact the next industrial the entire industrial itself the complex is powered by semiconductor technology so everything we we do and we want to 
do right you went from a vehicle that might have had 50 or 100 worth of semiconductors on a few different parts to one that might have 700 800 different chips in it thousands of dollars worth of semi of semiconductors so you know across the board though yes you're dealing with the dynamics of the shortage you're dealing with the dynamics of innovation you're dealing with moore's law and sort of coming to the end which is leading to new process we're dealing with the foundry versus fab versus invention and product development uh situation so there's so many different concurrent semiconductor narratives that are going on dave and we can talk about any of them and all of them and i'm sure as we do we'll overlap all these different themes you know maybe you can solve this mystery for me there's this this this chip shortage and you can't invent vehicle inventory is so tight but yet when you listen to uh the the ads if the the auto manufacturers are pounding the advertising maybe they're afraid of tesla they don't want to lose their brand awareness but anyway so listen it's by the way a background i want to get a little bit academic here but but bear with me i want to introduce actually reintroduce the concept of wright's law to our audience we know we all know about moore's law but the earlier instantiation actually comes from theodore wright t.p wright he was this engineer in the airplane industry and the math is a little bit abstract to apply but roughly translated says as the cumulative number of units produced doubles your cost per unit declines by a fixed percentage now in airplanes that was around 15 percent in semiconductors we think that numbers more like 20 25 when you add the performance improvements you get from silicon advancements it translates into something like 33 percent cost cost declines when you can double your cumulative volume so that's very important because it confers strategic advantage to the company with the largest volume so it's a learning curve dynamic and it's like andy jassy says daniel there's no compression algorithm for experience and it definitely applies here so if you apply wright's law to what's happening in the industry today we think we can get a better understanding of for instance why tsmc is dominating and why intel is struggling any quick thoughts on that well you have to take every formula like that in any sort of standard mathematics and kind of throw it out the window when you're dealing with the economic situation we are right now i'm not i'm not actually throwing it out the window but what i'm saying is that when supply and demand get out of whack some of those laws become a little bit um more difficult to sustain over the long term what i will say about that is we have certainly seen this found um this fabulous model explode over the last few years you're seeing companies that can focus on software frameworks and innovation that aren't necessarily getting caught up in dealing with the large capital expenditures and overhead the ability to as you suggested in the topics here partner with a company like arm that's developing innovation and then and then um you know offering it uh to everybody right and for a licensee and then they can quickly build we're seeing what that's doing with companies like aws that are saying we're going to just build it alibaba we're just going to build it these aren't chip makers these aren't companies that were even considered chip makers they are now today competing as chip makers so there's a lot of different 
dynamics going back to your comment about wright's law like i said as we normalize and we figure out this situation on a global scale um i do believe that the who can manufacture the most will certainly continue to have significant competitive advantages yeah no so that's a really interesting point that you're bringing up because one of the things that it leads me to think is that the chip shortage could actually benefit intel i think will benefit intel so i want to introduce this some other data and then get your thoughts on this very simply the chart on the left shows pc shipments which peaked in in 2011 and then began at steady decline until covid and they've the pcs as we know have popped up in terms of volume in the past year and looks like they'll be up again this year the chart on the right is cumulative arm shipments and so as we've reported we think arm wafer volumes are 10x those of x86 volumes and and as such the arm ecosystem has far better cost structure than intel and that's why pat gelsinger was called in to sort of save the day so so daniel i just kind of again opened up this this can of worms but i think you're saying long term volume is going to be critical that's going to confer low cost advantages but in the in in the near to mid-term intel could actually benefit from uh from this chip shortage well intel is the opportunity to position itself as a leader in solving the repatriation crisis uh this will kind of carry over when we talk more about china and taiwan and that relationship and what's going on there we've really identified a massive gap in our uh in america supply chain in the global supply chain because we went from i don't have the stat off hand but i have a rough number dave and we can validate this later but i think it was in like the 30-ish high 30ish percentile of manufacturing of chips were done here in the united states around 1990 and now we're sub 10 as of 2020. 
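To put some numbers behind the Wright's law relationship Dave lays out above: if each doubling of cumulative volume cuts unit cost by a learning rate r, then the cost of a unit after n cumulative units is roughly the first-unit cost times n raised to log2(1 - r). The sketch below is purely illustrative, using the rates mentioned in the discussion (roughly 15 percent for airplanes, 20 to 25 percent for semiconductors, and about 33 percent once silicon performance gains are folded in); the $100 first-unit cost is a made-up figure.

```python
# Illustrative sketch of Wright's law: every doubling of cumulative volume
# reduces unit cost by a fixed learning rate r, i.e.
#   cost(n) = first_unit_cost * n ** log2(1 - r)
# The $100 starting cost and the rates below are for illustration only.
import math

def wrights_law_unit_cost(first_unit_cost: float,
                          cumulative_units: float,
                          learning_rate: float) -> float:
    """Unit cost after `cumulative_units`, given a per-doubling learning rate."""
    exponent = math.log2(1.0 - learning_rate)
    return first_unit_cost * cumulative_units ** exponent

first_unit_cost = 100.0  # hypothetical cost of the first unit
for label, rate in [("airplanes (~15%)", 0.15),
                    ("semiconductors (~25%)", 0.25),
                    ("semis incl. performance gains (~33%)", 0.33)]:
    # After ten doublings, i.e. 1,024x the cumulative volume of a rival...
    cost = wrights_law_unit_cost(first_unit_cost, 2 ** 10, rate)
    print(f"{label}: unit cost falls from $100.00 to about ${cost:.2f}")
```

At a 25 percent learning rate the unit cost drops to a few dollars after ten doublings, which is the arithmetic behind the point that the producer with the largest cumulative volume, like the roughly 10x wafer-volume gap between the arm ecosystem and x86 discussed here, carries a structural cost advantage.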
so we we offshored almost all of our production and so when we hit this crisis and we needed more manufacturing volume we didn't have it ready part of the problem is you get people like elon musk that come out and make comments to the media like oh it'll be fixed later this year well you can't build a fab in a year you can't build a fab and start producing volume and the other problem is not all chips are the same so not every fab can produce every chip and when you do have fabs that are capable of producing multiple chips it costs millions of dollars to change the hardware and to actually change the process so it's not like oh we're going to build 28 today because that's what ford needs to get all those f-150s out of the lot and tomorrow we're going to pump out more sevens for you know a bunch of hp pcs it's a major overhaul every time you want to retool so there's a lot of complexity here but intel is the one domestic company us-based that has basically raised its hand and said we're going to put major dollars into this and by the way dave the arm chart you showed me could have a very big implication as to why intel wants to do that yeah so right because that's that's a big part of of foundry right is is get those volumes up so i want to hold that thought because i just want to introduce one more data point because one of the things we often talk about is the way in which alternative processors have exploded onto the scene and this chart here if you could bring that up patrick thank you shows the way in which i think you're pointing out intel is responding uh by leveraging alternative fat but once again you know kind of getting getting serious about manufacturing chips what the chart shows is the performance curve it's on a log scale for in the blue line is x86 and the orange line is apple's a series and we're using that as a proxy for sort of the curve that arm is on and it's in its performance over time culminating in the a15 and it measures trillions of operations per second so if you take the traditional x86 curve of doubling every 18 to 24 months that comes out roughly to about 40 percent improvement per year in performance and that's diminishing as we all know to around 30 percent a year because the moore's law is waning the orange line is powered by arm and it's growing at over a hundred percent really 110 per year when you do the math and that's when you combine the cpu the the the neural processing unit the the the xpu the dsps the accelerators et cetera so we're seeing apple use arm aws to you to your point is building chips on on graviton and and and tesla's using our list is long and this is one reason why so daniel this curve is it feels like it's the new performance curve in the industry yeah we are certainly in an era where companies are able to take control of the innovation curve using the development using the open ecosystem of arm having more direct control and price control and of course part of that massive arm number has to do with you know mobile devices and iot and devices that have huge scale but at the same time a lot of companies have made the decision either to move some portion of their product development on arm or to move entirely on arm part of why it was so attractive to nvidia part of the reason that it's under so much scrutiny that that deal um whether that deal will end up getting completed dave but we are seeing an era where we want we i said lust for power i talked about lust for semiconductors our lust for our technology to do more uh whether that's 
software-defined vehicles whether that's the smartphones we keep in our pocket or the desktop computer we use we want these machines to be as powerful and fast and responsive and scalable as possible if you can get 100 where you can get 30 improvement with each year and generation what is the consumer going to want so i think companies are as normal following the demand of consumers and what's available and at the same time there's some economic benefits they're they're able to realize as well i i don't want to i don't want to go too deep into nvidia arm but what do you handicap that that the chances that that acquisition actually happens oh boy um right now there's a lot of reasons it should happen but there are some reasons that it shouldn't i still kind of consider it a coin toss at this point because fundamentally speaking um you know it should create more competition but there are some people out there that believe it could cause less and so i think this is going to be hung up with regulators a little bit longer than we thought we've already sort of had some previews into that dave with the extensions and some of the timelines that have already been given um i know that was a safe answer and i will take credit for being safe this one's going to be a hard one to call but it certainly makes nvidia an amazing uh it gives amazing prospects to nvidia if they're able to get this deal done yeah i i agree with you i think it's 50 50. okay my i want to pose the question is intel too strategic to fail in march of this year we published this article where we posed that question uh you and i both know pat pretty well we talked about at the time the multi-front war intel is waging in a war with amd the arm ecosystem tsmc the design firms china and we looked at the company's moves which seemed to be right from a strategy standpoint the looking at the potential impact of the u.s government intel's partnership with ibm and what that might portend the us government has a huge incentive to make sure intel wins with onshore manufacturing and that looming threat from china but daniel is intel too strategic to fail and is pat gelsinger making the right moves well first of all i do believe at this current juncture where the semiconductor and supply chain shortage and crisis still looms that intel is too strategic to fail i also believe that intel's demise is somewhat overstated not to say intel doesn't have a slate of challenges that it's going to need to address long term just with the technology adoption curve that you showed being one of them dave but you have to remember the company still has nearly 90 of the server cpu market it still has a significant market share in client and pc it is seeing market share erosion but it's not happened nearly as fast as some people had suggested it would happen with right now with the demand in place and as high as it is intel is selling chips just about as quickly as it can make them and so we right now are sort of seeing the tam as a whole the demand as a whole continue to expand and so intel is fulfilling that need but where are they really too strategic to fail i mean we've seen in certain markets in certain uh process in um you know client for instance where amd has gained of course that's still x86 we've seen uh where the m1 was kind of initially thought to be potentially a pro product that would take some time it didn't take nearly as long for them to get that product in good shape um but the foundry and fab side is where i think intel really has a chance to 
flourish right now one it can play in the arm space it can build these facilities to be able to produce and help support the production of volumes of chips using arm designs so that actually gives intel and inroads two is it's the company that has made the most outspoken commitment to invest in the manufacturing needs of the united states both here in the united states and in other places across the world where we have friendly ally relationships and need more production capabilities if not in intel b and there is no other logical company that's us-based that's going to meet the regulator and policymakers requirements right now that is also raising their hand and saying we have the know-how we've been doing this we can do more of this and so i think pat is leaning into the right area and i think what will happen is very likely intel will support manufacturing of chips by companies like qualcomm companies like nvidia and if they're able to do that some of the market share losses that they're potentially facing with innovation challenges um and engineering challenges could be offset with growth in their fab and foundry businesses and i think i think pat identified it i think he's going to market with it and you know convincing the street that's going to be a whole nother thing that this is exciting um but i think as the street sees the opportunity here this is an area that intel can really lean into so i think i i think people generally would recognize at least the folks i talk to and it'll be interested in your thoughts who really know this business that intel you know had the best manufacturing process in in the world obviously that's coming to question but but but but for instance people say well intel's 10 nanometer you know is comparable to tsm seven nanometer and that's sort of overstated their their nanometer you know loss but but so so they they were able to point as they were able to sort of hide some of the issues maybe in design with great process and and i i believe that comes down to volume so the question i have then is and i think so i think patrick's pat is doing the right thing because he's going after volume and that's what foundry brings but can he get enough volume or does he need for inst for instance i mean one of the theories i've put out there is that apple could could save the day for intel if the if the us government gets apple in a headlock and says hey we'll back off on break up big tech but you got to give pat some of your foundry volume that puts him on a steeper learning curve do you do you worry sometimes though daniel that intel just even with like qualcomm and broadcom who by the way are competitors of theirs and don't necessarily love them but even even so if they could get that those wins that they still won't have the volume to compete on a cost basis or do you feel like even if they're numbered a number three even behind samsung it's good enough what are your thoughts on that well i don't believe a company like intel goes into a business full steam and they're not new to this business but the obvious volume and expansion that they're looking at with the intention of being number two or three these great companies and you know that's same thing i always say with google cloud google's not out to be the third cloud they're out to be one well that's intel will want to to be stronger if the us government and these investments that it's looking at making this 50 plus billion dollars is looking to pour into this particular space which i don't think is actually 
enough but if if the government makes these commitments and intel being likely one of the recipients of at least some of these dollars to help expedite this process move forward with building these facilities to make increased manufacturing very likely there's going to be some precedent of law a policy that is going to be put in place to make sure that a certain amount of the volume is done here stateside with companies this is a strategic imperative this is a government strategic imperative this is a putting the country at risk of losing its technology leadership if we cannot manufacture and control this process of innovation so i think intel is going to have that as a benefit that the government is going to most likely require some of this manufacturing to take place here um especially if this investment is made the last thing they're going to want to do is build a bunch of foundries and build a bunch of fabs and end up having them not at capacity especially when the world has seen how much of the manufacturing is now being done in taiwan so i think we're concluding and i i i correctly if i'm wrong but intel is too strategic to fail and and i i sometimes worry they can go bankrupt you know trying to compete with the likes of tsmc and that's why the the the public policy and the in the in the partnership with the u.s government and the eu is i think so important yeah i don't think bankruptcy is an immediate issue i think um but while i follow your train of thought dave i think what you're really looking at more is can the company grow and continue to get support where i worry about is shareholders getting exhausted with intel's the merry-go-round of not growing fast enough not gaining market share not being clearly identified as a leader in any particular process or technology and sort of just playing the role of the incumbent and they the company needs to whether it's in ai whether it's at the edge whether it's in the communications and service provider space intel is doing well you look at their quarterly numbers they're making money but if you had to say where are they leading right now what what which thing is intel really winning uh consistently at you know you look at like ai and ml and people will point to nvidia you look at you know innovation for um client you know and even amd has been super disruptive and difficult for intel uh of course you we've already talked about in like mobile um how impactful arm has been and arm is also playing a pretty big role in servers so like i said the market share and the technology leadership are a little out of skew right now and i think that's where pat's really working hard is identifying the opportunities for for intel to play market leader and technology leader again and for the market to clearly say yes um fab and foundry you know could this be an area where intel becomes the clear leader domestically and i think that the answer is definitely yes because none of the big chipmakers in the us are are doing fabrication you know they're they're all outsourcing it to overseas so if intel can really lead that here grow that large here then it takes some of the pressure off of the process and the innovation side and that's not to say that intel won't have to keep moving there but it does augment the revenue creates a new profit center and makes the company even more strategic here domestically yeah and global foundry tapped out of of sub 10 nanometer and that's why ibm's pseudonym hey wait a minute you had a commitment there the concern i have 
and this is where again your point is i think really important with the chip shortage you know to go from you know initial design to tape out took tesla and apple you know sub sub 24 months you know probably 18 months with intel we're on a three-year design to tape out cycle maybe even four years so they've got to compress that but that as you well know that's a really hard thing to do but the chip shortage is buying them time and i think that's a really important point that you brought out early in this segment so but the other big question daniel i want to test with you is well you mentioned this about seeing arm in the enterprise not a lot of people talk about that or have visibility on that but i think you're right on so will arm and nvidia be able to seriously penetrate the enterprise the server business in particular clearly jensen wants to be there now this data from etr lays out many of the enterprise players and we've superimposed the semiconductor giants in logos the data is an xy chart it shows net score that's etr's measure of spending momentum on the vertical axis and market share on the horizontal axis market share is not like idc market share its presence in the data set and as we reported before aws is leading the charge in enterprise architecture as daniel mentioned they're they're designing their own chips nitro and graviton microsoft is following suit as is google vmware has project monterey cisco is on the chart dell hp ibm with red hat are also shown and we've superimposed intel nvidia china and arm and now we can debate the position of the logos but we know that one intel has a dominant position in the data center it's got to protect that business it cannot lose ground as it has in pcs because the margin pressure it would face two we know aws with its annapurna acquisition is trying to control its own destiny three we know vmware has project monterey and is following aws's lead to support these new workloads beyond x86 general purpose they got partnerships with pansando and arm and others and four we know cisco they've got chip design chops as does hpe maybe to a lesser extent and of course we know ibm has excellent semiconductor design expertise especially when it comes to things like memory disaggregation as i said jensen's going hard after the data center you know him well daniel we know china wants to control its own destiny and then there's arm it dominates mobile as you pointed out in iot can it make a play for the data center daniel how do you see this picture and what are your thoughts on the future of enterprise in the context of semiconductor competition it's going to take some time i believe but some of the investments and products that have been brought to market and you mentioned that shorter tape out period that shorter period for innovation whether it's you know the graviton uh you know on aws or the aiml chips that uh with trainium and inferentia how quickly aws was able to you know develop build deploy to market an arm-based solution that is being well received and becoming an increasing component of the services and and uh products that are being offered from aws at this point it's still pretty small and i would i would suggest that nvidia and arm in the spirit of trying to get this deal done probably don't necess don't want the enterprise opportunity to be overly inflated as to how quickly the company's going to be able to play in that space because that would somewhat maybe slow or bring up some caution flags that of the regulators that are that 
are monitoring this. At the same time, you could argue that Arm offering additional options and competition, much like it's doing in client, will offer new form factors, new designs, new SKUs. The OEMs will be able to create more customized hardware offerings that might be unique for certain enterprises and industries, and they can put more focus there. We're seeing the disaggregation with DPUs and how that technology is using Arm, with what AWS is doing with Nitro, and what these different companies are doing to use semiconductor technology to split out security, networking, and storage. So you start to see that design innovation could become very interesting on the foundation of Arm. So in time, I certainly see momentum. Right now, the thing is, most companies in the enterprise are looking for something that's fairly well baked, off the shelf, that can meet their needs, whether it's SAP or running different custom applications that the business is built on top of, commerce solutions, and Intel meets most of those needs. So Arm has made a lot of sense, for instance, with these cloud-scale providers, but not necessarily as much sense for enterprises, especially those that don't want to look at refactoring all their workloads. But as software becomes simpler, as refactoring becomes easier to do between different technologies and processes, you start to say, well, Arm could be compelling. Because the bottom line is, and we know this from mobile devices, most of us don't care what the processor is; the average person looks at many of these companies the same. In enterprise it's always mattered, kind of like in the PC world it used to really matter; that's where "Intel Inside" was born. But as we continue to grow up, you see these different companies, Nvidia, AMD, Intel, all seen as very worthy companies with very capable technologies in the data center. If they can offer economics, if they can offer performance, if they can offer faster time to value, people will look at them. So I'd say, in time, Dave, the answer is Arm will certainly become more and more competitive in the data center, like it was able to do at the edge and in mobile. >> Yeah, one of the things that we've talked about is that the software-defined data center is awesome, but it also created a lot of wasted overhead in terms of offloading storage and networking and security, and much of that is being done with general-purpose x86 processors, which are more expensive than, for instance, the alternatives. You gave a great summary of what AWS is doing with Graviton and Trainium and other tooling, and you look at what Ampere is doing in Oracle, and you're seeing both of those companies, particularly AWS, get ISVs to write for Arm so they can run general-purpose applications on Arm-based processors as well. It sets up well for AI inferencing at the edge, and we know Arm is dominating the edge. We see all these new types of workloads coming into the data center, and if you look at what companies like Nebulon and Pensando and others are doing, you're seeing a lot of their offloads going to Arm. They're putting Arm in, even though they're still using x86 in a lot of cases, but they're offloading to Arm. So it seems like they're coming in through the back door. I understand your point, actually, about them not wanting to overplay their hand there, especially during these negotiations, but we think that
long term, you know, it bears watching. But Intel, they have such a strong presence, they've got a super strong ecosystem, and they really have great relationships with a lot of the enterprise players and influence over them, so they're going to use that. The chip shortage benefits them, and on the relationship with the U.S. government, Pat is spending a lot of time working that. So it's really going to be interesting to see how this plays out. Daniel, I want to give you the last word, your final thoughts on what we talked about today and where you see this all headed. >> I think the world benefits as a whole with more competition and more innovation pressure. I like to see more players coming into the fray. I think we've seen Intel react over the last year under Pat Gelsinger's leadership; we've seen the technology innovation, the Angstrom era, the 20A, and we're starting to see what that roadmap is going to look like. We've certainly seen how companies like Nvidia can disrupt, come into market, and not just use hardware but use software to play a major role. But as a whole, as innovation continues to take form at scale, we all benefit. It means more intelligent software-defined vehicles. It puts phones in our hands that are more powerful. It gives power to cities, governments, and enterprises that can build applications and tools that give us social networks and data-driven experiences. So I'm very bullish and optimistic as a whole. I said this before and I'll say it again: I believe semiconductors will eat the world. And we didn't even really talk about all the companies. Whether it's in AI, with the likes of Groq or Graphcore, there are some very cool companies building things. You've got Qualcomm, which bought Nuvia, another company that could come out of the blue and offer us new innovations in mobile and personal computing. I mean, there are so many cool companies, Dave. With the scale of data and the growth in demand and desire for connectivity in the world, it's never been a more interesting time to be a fan of technology. The only thing I will say, as a whole, as a society, is I hope we can fix this problem, because it does create risks. The supply chain, inflation, the economics, all of that ties together, and a lot of people don't see that. But if we can't get this manufacturing issue under control... We didn't really talk about China, Dave, and I'll just say Taiwan and China are very physically close together, and the way that China sees Taiwan and the way we see Taiwan is completely different. We have very little control over what can happen; we've all seen what's happened with Hong Kong. So, as I said when I started this conversation, we've got all these trains on the track. They're all moving, but they're not in parallel. These tracks are all converging, but the convergence isn't perpendicular, so sometimes we don't see how all these things interrelate. But as a whole, it's a very exciting time. I love being in technology, and I love having the chance to come out here and talk with you. >> I love the optimism, and you're right, that competition is going to come from China as well. Xi has made it a part of his legacy, I think, to reincorporate Taiwan. That's going to be interesting to see. I mean, Taiwan ebbs and flows with regard to its leadership; sometimes they're more pro, I guess I should say less anti-China, maybe that's the better way to say it. And China's putting in big fab capacity for NAND.
You know, maybe people look at that and say some of that is the low end of the market, but Clayton Christensen would say, well, go take a look at the steel industry and see what happened there. So we didn't talk much about China, and that was my oversight, but they're after self-sufficiency. It's not like they haven't tried before, kind of like Intel has tried foundry before, but I think they're really going for it this time. So now, do you believe that China will be able to get to self-sufficiency, let's say within the next 10 to 15 years, with semiconductors? >> Yes. I would never count China out of anything if they put their mind to it, if it's something that they want to put absolute focus on. I think right now China vacillates between wanting to be a good player and a good steward to the world, and wanting to completely run its own show. The politicization of what's going on over there: we all saw what happened in the real estate market this past week, we saw what happened with ed tech over the last few months, and we've seen what's happened with innovation and entrepreneurship. It is not entirely clear if China wants to give the more capitalistic and innovation-driven ecosystem a full try, but it has certainly shown that it wants to be seen as a world leader, and over the last few decades it's accomplished that in almost any area where it wants to compete. Dave, I would say if this is one of Xi Jinping's primary focuses, if they want to do this, it would be very irresponsible to rule it out as a possibility. >> Daniel, I've got to tell you, I love collaborating with you. We met face to face just recently, and I hope we can do it again. I'd love to have you back on the program. Thanks so much for your time and insights today. >> Thanks for having me, Dave. >> So, Daniel's website is Futurum Research, that's with three u's in Futurum; check it out at futurumresearch.com. This individual is really plugged in, he's forward thinking, and a great resource. @danielnewmanUV is his Twitter handle, so go follow him for some great stuff. And remember, these episodes are all available as podcasts wherever you listen; all you do is search "Breaking Analysis podcast." We publish each week on wikibon.com and siliconangle.com. And by the way, Daniel, thank you for contributing your quotes to SiliconANGLE; the writers there love you. You can always connect on Twitter, I'm @dvellante, or you can email me at david.vellante@siliconangle.com. I appreciate the comments on LinkedIn, and don't forget to check out etr.plus for all the survey data. This is Dave Vellante for theCUBE Insights, powered by ETR. Be well, and we'll see you next time.
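A note for readers who want to reproduce the kind of XY view Dave references above, with Net Score (spending momentum) on the vertical axis and presence in the ETR data set on the horizontal axis: the sketch below is a minimal, hypothetical illustration only. The vendor names, positions, and the 40% momentum line are placeholders, not actual ETR survey data or ETR tooling.

```python
# Hypothetical illustration of an ETR-style XY chart: Net Score (spending
# momentum) vs. presence in the survey data set. All values are placeholders.
import matplotlib.pyplot as plt

vendors = {
    # name: (presence_in_dataset_pct, net_score_pct) -- made-up numbers
    "AWS":    (30, 60),
    "Intel":  (25, 20),
    "NVIDIA": (12, 55),
    "Arm":    (5, 45),
}

fig, ax = plt.subplots(figsize=(8, 6))
for name, (presence, net_score) in vendors.items():
    ax.scatter(presence, net_score)
    ax.annotate(name, (presence, net_score),
                textcoords="offset points", xytext=(5, 5))

# A dashed line often marks an "elevated" spending-momentum level (e.g., 40%).
ax.axhline(40, linestyle="--", linewidth=1)
ax.set_xlabel("Presence in data set (%)")   # not IDC-style market share
ax.set_ylabel("Net Score (spending momentum, %)")
ax.set_title("Illustrative ETR-style XY view (placeholder data)")
plt.show()
```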
Vikas Ratna and James Leach | Cisco Future Cloud 2021
>> From around the globe it's theCube. Presenting Future Cloud. One event, a world of opportunities. Brought to you by Cisco. >> We're here with Vikas Ratna, who's the director of product management for ECS at Cisco and James Leach is the director of business development for UCS at Cisco as well. We're going to talk about computing in the age of hybrid cloud. Welcome gentlemen, great to see you. >> Thank you. >> Thank you. >> Vikas let's start with you and talk a little bit about computing architectures. We know that they're evolving, they're supporting new data intensive and other workloads, especially as high-performance workload requirements, what's Cisco's point of view on all this? And specifically, I'm interested in your thoughts on fabrics, I mean, it's kind of your wheelhouse, you've got accelerators, What are the workloads that are driving these evolving technologies and how is it impacting customers? What are you seeing? >> Sure, Dave. First of all, very excited to be here today. You're absolutely right. The pace of innovation and foundational platform ingredients have just been phenomenal in recent years. The fabric, accelerators, the drives, the processing power, the core density all have been evolving at just an amazing pace and the pace will only pick up further, but ultimately it is all about applications and the way applications leverage those innovations. And we do see applications evolving quite rapidly. The new classes of applications are evolving to absorb those innovations and deliver much better business values, very, very exciting times, Dave, but talking about the impact on the customers, well these innovations have helped them pretty positively. We do see significant challenges in the data center with a point product based approach of delivering these platform innovations to the applications. What has happened is these innovations today are being packaged as point products to meet the needs of a specific application. And as you know, the different applications have their different needs. Some applications need more tributes, others need more memory, yet others need, you know, more cores. Some need different kinds of fabrics. As a result, if you walk into a data center today, it is pretty common to see many different point products in the data center. This creates a manageability challenge. Imagine the aspect of managing, you know, several different form factors, one you, to you, purpose-built servers or the variety of, you know, blade form factor. You know, this reminds me of the situation we had before smartphones arrived. You remember the days when you, when we used to have a GPS device for navigation system, a cool music device for listening to the music, a phone device for making a call, camera for taking the photos. Right? And we were all excited about it. It's when the smartphones arrived that we realized all those cool innovations could be delivered in a much simpler, much convenient, and easy to consume it through one device and, you know, and that could completely transform our experience. So we see the customers who are benefiting from these innovations to have a way to consume those things in a much more simplistic way than they are able to do it today. >> And I like, look, it's always been about the applications, but to your point, the applications are now moving at a much faster pace. The customer experience is, expectation, is way escalated. 
And when you combine all these, I love your analogy there Vikas, because when you combine all these capabilities, it allows us to develop new applications, new capabilities, new customer experiences. So that's the, I always say, the next 10 years, they ain't going to be like the last. And James, public cloud obviously is heavily influencing compute design and customer operating models. You know, it's funny, when the public cloud first hit the market, everyone, we were swooning about oh, low cost, standard off-the-shelf servers, you know, and storage devices, but it quickly became obvious that customers needed more. So I wonder if you could comment on this. How are the trends that we've seen from the hyperscalers, how are they filtering into on-prem infrastructure and maybe, you know, maybe there's some differences there as well that you could address? >> Absolutely. So, you know, I'd say first of all, quite frankly, you know, public cloud has completely changed the expectations of how our customers want to consume compute, right? So customers, especially in a public cloud environment, they've gotten used to, or, you know, come to accept that they should consume from the application out, right? They want a very application-focused view, a services-focused view of the world. They don't want to think about infrastructure, right? They want to think about their application. They want to move outward, right? So, this means that the infrastructure basically has to meet the application where it lives. So what that means for us is that, you know, we're taking a different approach. We've decided that, you know, we're not going to chase this, you know, single pane of glass view of the world, which, you know, frankly our customers don't want. They don't want a single pane of glass. What they want is a single operating model. They want an operating model that's similar to what they can get with the public cloud, but they want it across all of their cloud options. They want it across private cloud, across hybrid cloud options, as well. So what that means is they don't want to just consume infrastructure services. They want all of their cloud services from this operating model. So that means that they may want to consume infrastructure services for automation orchestration, but they also need Kubernetes services. They also need virtualization services. They may need Terraform, workload optimization. All of these services have to be available from within the operating model, a consistent operating model, right? So it doesn't matter whether you're talking about private cloud, hybrid cloud, anywhere, where the application lives doesn't matter. What matters is that we have a consistent model, that we think about it from the application out, and frankly, I'd say, you know, this has been the stumbling block for private cloud. Private cloud is hard, right? This is why it hasn't been really solved yet. This is why we had to take a brand new approach. And frankly, it's why we're super excited about X Series and intersight as that, you know, operating model that fits the hybrid cloud better than anything else we've seen. >> This is a Cube first, first time's a technology vendor has ever said that it's not about a single pane of glass because I've been hearing for decades we're going to deliver a single pane of glass. It's going to be seamless and it never happens. It's like a single version of the truth. It's aspirational. And it's just not reality. 
So can we stay on the X Series for a minute, James, maybe in this context, but in the launch that we saw today, it was like a fire hose of announcements. So, how does the X Series fit into the strategy with intersight, and hybrid cloud in this operating model that you're talking about? >> Right. So, I think it goes hand-in-hand, right? The two pieces go together very well. So we have, you know, this idea of a single operating model that is definitely, you know, something that our customers demand, right? It's what we have to have, but at the same time we need to solve the problems Vikas was talking about before, we need a single infrastructure to go along with that single operating model. So no longer do we need to have silos within the infrastructure that give us different operating models or different sets of benefits, when you want infrastructure that can kind of do all of those configurations, all those applications. And then, you know, the operating model is very important because that's where we abstract the complexity that could come with just throwing all that technology at the infrastructure. So that, you know, this is, you know, the way that we think about it is the data center is not centered, right? It's no longer centered. Applications live everywhere. Infrastructure lives everywhere. And, you know, we need to have that consistent operating model, but we need to do things within the infrastructure as well to take full advantage, right? So we want all the SaaS benefits of a CICD model of, you know, the intersight can bring, we want all of that, you know, proactive recommendation engine with the power of AI behind it, we want the connected support experience. We want all of that, but we want to do it across a single infrastructure. And we think that that's how they tie together. That's why one or the other doesn't really solve the problem, but both together. That's why we're here. That's why we're super excited. >> So Vikas, I make you laugh a little bit. When I was an analyst at IDC, I was a bit deep into infrastructure, And then when I left, I was doing, I was working with application development heads. And like you said, infrastructure, it was just a roadblock, but it was so the tongue-in-cheek is when Cisco announced UCS a decade ago, I totally missed it. I didn't understand it. I thought it was Cisco getting into the traditional server business. And it wasn't until I dug in that I realized that your vision was really to transform infrastructure deployment and management. And change the model. It was like, okay, I got that wrong. But, so let's talk about the, the ecosystem and the joint development efforts that are going on there. X Series, how does it fit into this converged infrastructure business that you've built and grown with partners? You've got storage partners like NetApp and Pure. You got ISV partners in the ecosystem. We see Cohesity, it's been a while since we hung out with all these companies at the Cisco live, hopefully next year, but tell us what's happening in that regard. >> No, absolutely. I'm looking forward to seeing you in the Cisco live next year, Dave. Absolutely. You brought up a very good point. UCS is about the ecosystem that it brings together. It's about making our customers bring up the entire infrastructure, from the core foundational hardware all the way to the application level so that they can all go off and running pretty quick. 
That converse infrastructure has been one of the cornerstones of our strategy, as you pointed out, in the last decade. And I'm very glad to share that conversed infrastructure continues to be a very popular architecture for several enterprise applications even today. In fact, it is the preferred architecture for mission critical applications, where performance, resiliency, latency, are the critical requirements. They are almost de facto standards for large scale deployments of virtualize and business critical databases and so forth. With X Series, with our partnerships, with our restorative partners, those architectures will absolutely continue and will get better. But in addition, it's a hybrid cloud world. So we are now bringing in the benefits of conversed infrastructure to the world of hybrid cloud. We'll be supporting the hybrid cloud applications now with the CA infrastructure that we have built together with our strong partnership with the store as partners to tell you with the same benefits to the new age applications as well. >> Yeah and that's what customers want, they want that cloud operating model. Right? Go ahead, please. >> I was just going to say, you know, that the CA model will continue to thrive. It will transition out, it will expand the use cases now for the newer use cases that we were beginning to see, Dave, absolutely. >> Great. Thank you for that. And James, like I said earlier today, we heard this huge announcement, a lot of parts to it. And we heard, you know, KD talk about this initiative is, it's really computing built for the next decade. I mean, I like that because it shows some vision and that you've got, you know, a roadmap, that you've thought through the coming changes in workloads and infrastructure management and some of the technology that you can take advantage of beyond just the, you know, one or two product cycles. So, but I want to understand what you've done here specifically that you feel differentiates you from other competitive architectures in the industry. >> Sure. You know, that's a great question. number one. Number two, I'm frankly a little bit concerned at times for customers in general, for our customers, customers in general, because if you look at what's in the market, right? These rinse and repeat systems that were effectively just rehashes of the same old design, right? That we've seen since before 2009 when we brought UCS to market, these are what we're seeing over and over and over again, that's not really going to work anymore, frankly. And I think that people are getting lulled into a false sense of security by seeing those things continually put in the market. We've rethought this from the ground up because frankly, you know, future-proofing starts now, right? If you're not doing it right today, future-proofing isn't even on your radar because you're not even, you're not even today-proofed. So we've rethought the entire chassis, the entire architecture, from the ground up. Okay. If you look at other vendors, if you look at other solutions in the market, what you'll see is things like, you know management inside the chassis. That's a great example. Daisy chaining them together. Like, who needs that? Who wants that? Like, that kind of complexity is, first of all, it's ridiculous. Second of all, if you want to manage across clouds you have to do it from the cloud, right? It's just common sense. 
You have to move management where it can have the scale and the scope that it needs to impact, you know, your entire domain, your world, which is much larger now than it was before. We're talking about true hybrid cloud here. Right? So, we had to, you know, solve certain problems that existed in the traditional architecture. You know, I can't tell you how many times I heard you know, talk about, you know, the mid plane is a great example. Well, you know, the mid plane in a chassis is a limiting factor. It limits us on how much we can connect or how much bandwidth we have available to the chassis. It limits us on air flow and other things. So how do you solve that problem? Simple. Just get rid of it. Like we just, we took it out, right? It's now no longer a problem. We designed an architecture that doesn't need it. It doesn't rely on it, no forklift upgrades. So as we start moving down the path of needing liquid cooling, or maybe we need to take advantage of some new high performance, low latency fabrics. We can do that with almost no problem at all, right? So we don't have any forklift upgrades. Park your forklift on the side. You won't need it anymore because you can upgrade granularly. You can move along as technologies come into existence that maybe don't even exist today. They may not even be on our radar today to take advantage of but I like to think of these technologies. You know, they're really important to our customers. These are, you know, we can call them disruptive technologies. The reality is that we don't want to disrupt our customers with these technologies. We want to give them these technologies so they can go out and be disruptive themselves, right? And this is the way that we've designed this, from the ground up, to be easy consume and to take advantage of what we know about today and what's coming in the future that we may not even know about. So we think this is a way to give our customers that ultimate capability, flexibility, and future-proofing. >> I like that phrase, true hybrid cloud. It's one that we've used for years. But to me, this is all about that horizontal infrastructure that can support that vision of what true hybrid cloud is. You could support the mission critical applications. You can develop on the system and you can support a variety of workloads. You're not locked into, you know, one narrow stovepipe. And that does have legs. Vikas and James, thanks so much for coming on the program. Great to see you. >> Thank you, we appreciate the time. >> Thank you. >> And thank you for watching. This is Dave Volante for theCube, the leader in digital event coverage. (uplifting music)
Vijoy Pandey, Cisco | Cisco Future Cloud
>>from around the globe it's the >>cube >>presenting >>Future Cloud one event. A >>world of >>opportunities >>brought to you by Cisco. We're here with Dejoy Pandey a VP of emerging tech and incubation at Cisco. V. Joy. Good to see you welcome. >>Good to see you as well. Thank you Dave and pleasure to be here. >>So in 2020 we kind of had to redefine the notion of agility when it came to digital business or you know organizations, they had to rethink their concept of agility and business resilience. What are you seeing in terms of how companies are thinking about their operations in this sort of new abnormal context? >>Yeah I think that's a great question I think what what we're seeing is that pretty much the application is the center of the universe and if you think about it the application is actually driving brand recognition and the brand experience and the brand value. So the example I like to give is think about a banking app uh recovered that did everything that you would expect it to do. But if you wanted to withdraw cash from your bank you would actually have to go to the ATM and punch in some numbers and then look at your screen and go through a process and then finally withdraw cash. Think about what that would have, what that would do in a post pandemic era where people are trying to go contact less. And so in a situation like this the digitization efforts that all of these companies are going through and the modernization of the automation is what is driving brand recognition, brand trust and brand experience. >>Yeah. So I was gonna ask you when I heard you say that, I was gonna say well but hasn't it always been about the application? But it's different now, isn't it? So I wonder if you talk more about how the application is experience is changing? Yes. As a result of this new digital mandate. But how should organizations think about optimizing those experiences in this new world? >>Absolutely. And I think, yes, it's always been about the application, but it's becoming the center of the universe right now because all interactions with customers and consumers and even businesses are happening through that application. So if the application is unreliable or if the application is not available is untrusted insecure, uh, there's a problem. There's a problem with the brand with the company and the trust that consumers and customers have with our company. So if you think about an application developer, the weight he or she is carrying on their shoulders is tremendous because you're thinking about rolling features quickly to be competitive. That's the only way to be competitive in this world. You need to think about availability and resiliency, like you pointed out and experience, you need to think about security and trust. Am I as a customer or consumer willing to put my data in that application? So velocity availability, security and trust and all of that depends on the developer. So the experience, the security, the trust, the feature velocity is what is driving the brand experience now. >>So are those two tensions that say agility and trust, you know, zero trust used to be a buzzword now, it's a mandate. But are those two vectors counter posed? Can they be merged into one and not affect each other? Does the question makes sense? Right? Security usually handcuffs my speed. But how do you address that? >>Yeah, that's a great question. And I think if you think about it today, that's the way things are. 
And if you think about this developer, all they want to do is run fast because they want to build those features out and they're going to pick and choose a purpose and services that matter to them and build up their app and they want the complexities of the infrastructure and security and trust to be handled by somebody else is not that they don't care about it, but they want that abstraction so that is handled by somebody else. And typically within an organization we've seen in the past where there's friction between Netapp, Succop cited hopes and the cloud platform teams and the developer on one side and these these frictions and these meetings and toil actually take a toll on the developer and that's why companies and apps and developers are not as agile as they would like to be. So I think, but it doesn't have to be that way. So I think if there was something that would allow a developer to pick and choose, discover the apis that they would like to use, connect those api is in a very simple manner and then be able to scale them out and be able to secure them and in fact not just secure them during the run time when it's deployed, we're right off the back when the fire up that I'd and start developing the application, wouldn't that be nice? And as you do that, there is a smooth transition between that discovery connectivity and ease of consumption and security with the idea cops, netapp psych ops teams and see source to ensure that they are not doing something that the organization won't allow them to do in a very seamless manner. >>I want to go back and talk about security but I want to add another complexity before we do that. So for a lot of organizations in the public cloud became a staple of keeping the lights on during the pandemic. But it brings new complexities and differences in terms of latency security, which I want to come back to deployment models etcetera. So what are some of the specific networking challenges that you've seen with the cloud? Native architecture is how are you addressing those? >>Yeah. In fact, if you think about cloud, to me that is a that is a different way of seeing a distributed system. And if you think about a distributed system, what is at the center of the distributed system is the network. So my my favorite comment here is that the network is the wrong time for all distribute systems and modern applications. And that is true because if you think about where things are today, like you said, there's there's cloud assets that a developer might use in the banking example that I gave earlier. I mean if you want to build a contact less app so that you get verified, a customer gets verified on the app. They walk over to the ATM and they were broadcast without touching that ATM. In that kind of an example, you're touching the mobile Rus, let's say, Ohio escapees, you're touching Cloud API is where the back end might sit, you're touching on primary purpose, maybe it's an oracle database or a mainframe even where transactional data exists, you're touching branch pipes were the team actually exists and the need for consistency when you withdraw cash and you're carrying all of this and in fact there might be customer data sitting in Salesforce somewhere. So it's cloud API is a song premise branch, it's ass is mobile and you need to bring all of these things together and over time you will see more and more of these API is coming from various as providers. So it's not just cloud providers but saAS providers that the developer has to use. 
And so this complexity is very, very real, and this complexity is across the wide open internet. So the application is built across this wide open internet. So the problems of discoverability, the problems of being able to simply connect these APIs and manage the data flow across these APIs, the problems of consistency of policy and consumption, because all of these APIs have their own nuances in what they mean, what the arguments mean, and what the API actually means. How do you make it consistent and easy for the developer? That is the networking problem. And that is a problem of building out this network, making traffic engineering easy, making policy easy, making scale out and scale down easy. All of those are networking problems, and so we are solving those problems here at Cisco. >>Yeah, the internet is the new private network, but it's not so private. So I want to go back to security. I often say that the security model of building a moat, where you dig the moat and you get the hardened castle, is just outdated now that the queen has left her castle. I always say it's dangerous out there. And the point is, you touched on this, it's a huge decentralized system, and with distributed apps and data, that notion of perimeter security is just no longer valid. So I wonder if you could talk more about how you're thinking about this problem. You definitely addressed some of that in your earlier comments, but what are you specifically doing to address this, and how do you see it evolving? >>Yeah, that's a very important point. I think if you think about, again, the wide open internet being the runtime for all modern applications, what is perimeter security in this new world? To me it boils down to securing an API, because again, going with that running example of the contactless cash withdrawal feature for a bank: that API, wherever it sits, on-prem, branch, SaaS, cloud, iOS, Android, doesn't matter, that API is your new security perimeter, and the data object it is trying to access is also the new security perimeter. So if you can secure API-to-API communication and API-to-data-object communication, you should be good. So that is the new frontier. But guess what, software is buggy. Everybody's software, not just saying Cisco software, everybody's software is buggy, and humans are not reliable, and so things mature, things change, things evolve over time. So there needs to be defense in depth. You need to secure at the API layer and the data object layer, but you also need to secure at every layer below it, so that you have good defense in depth if any layer in between is not working properly. So for us that means securing API-to-API communication, not just during runtime when the app has been deployed and is running, but during deployment and also during the development life cycle. So as soon as the developer launches an IDE, they should be able to figure out whether this API is secure, whether it is reputable, whether it is compliant with my organization's needs, because it is hosted, let's say, out of Germany, and my organization wants APIs to be used only if they are being hosted out of Germany. So compliance needs, and security needs, and reputation. Is it available all the time? Is it secure? And being able to provide that feedback all the time between the security teams and the developer teams in a very seamless, real-time manner.
Yes, again, that's something that we're trying to solve through some of the services that we're trying to produce here at Cisco. >>Yeah, I mean, that layered approach that you're talking about is critical, because every layer has, you know, some vulnerability, and so you've got to protect that with some depth. In terms of thinking about security, how should we think about where Cisco's primary value add is? I mean, Cisco has a great security business, a growing business. Is it your intention to add value across the entire value chain? Obviously you can't do everything, so you've got to partner, but how should we think about Cisco's role over the next, I'm thinking longer term, over the next decade? >>Yeah, I mean, I think so. We do come in with good strength from the runtime side of the house. So if you think about the security aspects that we have in play today, there's a significant set of assets that we have around user security, with Duo and passwordless. We have significant assets in runtime security; I mean, the entire portfolio that Cisco brings to the table is around runtime security, the security aspects around posture and policy that we bring to the table. And as you see Cisco evolve over time, you will see us shifting left. I know it's an overused term, but that is where security is moving towards, and so that is where API security and data security are moving towards. So, learning from what we have during runtime, because again, runtime is where you learn what's available and where you can apply all of the ML and AI models to figure out what works and what doesn't; taking those learnings, taking those catalogs, taking that reputation database, and moving it into the deployment and development life cycle, and making sure that it's part of that entire develop-to-deploy-to-runtime chain, is what you will see Cisco do over time. >>That's a fantastic, phenomenal perspective, Vijoy. Thanks for coming on theCUBE. Great to have you, and I look forward to having you on again. >>Absolutely. Thank you. Pleasure to be here. >>This is Dave Vellante for theCUBE. Thank you for watching.
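To make Vijoy's point about treating each API and data object as the security perimeter a bit more concrete, here is a minimal, hypothetical sketch of the kind of dev-time policy check he alludes to: flagging an API whose hosting region, transport security, or reputation violates an organization's rules before the app ever ships. The record fields, policy values, and function names below are assumptions for illustration only; they are not Cisco APIs or products.

```python
# Minimal, hypothetical sketch of a dev-time API policy check, in the spirit of
# "flag a non-compliant API as soon as the developer starts using it."
# The catalog fields, policy values, and reputation scores are illustrative only.
from dataclasses import dataclass

@dataclass
class ApiRecord:
    name: str
    hosted_region: str      # where the API is hosted
    uses_mtls: bool         # mutual TLS between services
    reputation: float       # 0.0 - 1.0, from some assumed reputation feed

ORG_POLICY = {
    "allowed_regions": {"DE"},   # e.g. "only use APIs hosted out of Germany"
    "require_mtls": True,
    "min_reputation": 0.8,
}

def check_api(api: ApiRecord, policy=ORG_POLICY) -> list[str]:
    """Return a list of policy violations for this API (empty means compliant)."""
    violations = []
    if api.hosted_region not in policy["allowed_regions"]:
        violations.append(f"{api.name}: hosted in {api.hosted_region}, "
                          f"allowed regions are {policy['allowed_regions']}")
    if policy["require_mtls"] and not api.uses_mtls:
        violations.append(f"{api.name}: mutual TLS is not enabled")
    if api.reputation < policy["min_reputation"]:
        violations.append(f"{api.name}: reputation {api.reputation} is below "
                          f"the required {policy['min_reputation']}")
    return violations

# Example of the feedback an IDE plugin or CI job could surface pre-deployment.
payments_api = ApiRecord("payments-v2", hosted_region="US",
                         uses_mtls=True, reputation=0.9)
for issue in check_api(payments_api):
    print("POLICY VIOLATION:", issue)
```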
Thomas Scheibe | Cisco Future Cloud
(upbeat music) >> Narrator: From around the globe, it's theCUBE. Presenting Future Cloud. One event, a world of opportunities. Brought to you by Cisco. >> Okay. We're here with Thomas Scheibe, who's the vice president of Product Management, aka VP of all things Data Center Networking, STN, cloud, you name it in that category. Welcome Thomas, good to see you again. >> Hey, same here. Thanks for having me on. >> Yeah, it's our pleasure. Okay. Let's get right into observability. When you think about observability, visibility, infrastructure monitoring, problem resolution across the network, how does cloud change things? In other words, what are the challenges that networking teams are currently facing as they're moving to the cloud and trying to implement hybrid cloud? >> Yeah. (scoffs) Yeah. Visibility as always is very, very important and it's quite frankly, it's not just, it's not just the networking team, it's actually the application team too, right? And as you pointed out, the the underlying impetus to what's going on here is the, the data center is wherever the data is, and I think we said this a couple years back. And really what happens the, the applications are going to be deployed in different locations, right? Whether it's in a public cloud, whether it's on-prem and they're built differently, right? They're built as micro servers, so they might actually be distributed as well at the same application. And so what that really means is you need, as an operator as well as actually a user, a better visibility, "where are my pieces?", and you need to be able to correlate between where the app is and what the underlying network is, that is in place in these different locations. So you have actually a good knowledge why the app is running so fantastic or sometimes not. So I think that's, that's really the problem statement. What, what we're trying to go after with observability. >> Okay. Let's, let's double click on that. So, so a lot of customers tell me that you got to stare at log files until your eyes bleed, then you've got to bring in guys with lab coats who have PhDs to figure all this stuff out. >> Thomas: Yeah. >> So you just described, it's getting more complex, but at the same time, you have to simplify things. So how, how are you doing that? >> Correct. So what we basically have done is we have this fantastic product that is called ThousandEyes. And so what this does is basically (chuckles) as the name which I think is a fantastic, fantastic name. You have these sensors everywhere and you can have a good correlation on links between if I run a from a site to a site, from a site to a cloud, from the cloud to cloud. And you basic can measure what is the performance of these links? And so what we're, what we're doing here is we're actually extending the footprint of the ThousandEyes agent, right? Instead of just having a, an inversion machine of clouds we are now embedding them with the Cisco network devices, right? We announced this was the Catalyst 9000. And we're extending this now to our 8000 Catalyst product line for the for the SD-WAN products, as well as to the data center products, in Nexus line. And so what you see is, is you know, a half a thing, you have ThousandEyes. You get a million insights and you get a billion dollar off improvements for how your applications run. And this is really the, the power of tying together the footprint of what a network is with the visibility, what is going on. 
So you actually know the application behavior that is attached to this network. >> I see. So, okay. So as the cloud evolves, it expands, it connects, you're actually enabling ThousandEyes to go further, not just confined within a single data center location but out to the network across clouds, et cetera. >> Thomas: Correct. >> Wherever the network is you're going to have a ThousandEyes sensor and you can bring this together and you can quite frankly pick, if you want to say, Hey I have my application in public cloud provider A domain one, and I have another one in domain two I can do monitor that link. I can also monitor, I have a user that has a campus location or a branch location. I kind of put an agent there and then I can monitor the connectivity from that branch location all the way to the, let's say, corporation's data center or headquarter or to the cloud. And I can have these probes and just the, have visibility in saying, Hey, if there's a performance I know where the issue is. And then I obviously can use all the other tools that we have to address those. >> All right, let's talk about the cloud operating model. Everybody tells us that, you know, it's really the change in the model that drives big numbers in terms of ROI. And I want you to maybe address how you're bringing automation and DevOps to this world of hybrid and specifically, how is Cisco enabling IT organizations to move to a cloud operating model as that cloud definition expands? >> Yeah, no, that's that's another interesting topic beyond the observability. So it really, really what we're seeing, and this is going on for, I want to say couple of years now it's really this transition from operating infrastructure as a networking team, more like a service like what you would expect from a cloud provider, right? This is really around the networking team offering services like a cloud provided us. And that's really what the meaning is of cloud operating model, right? Where this is infrastructure running your own data center where that's linking that infrastructure was whatever runs on the public cloud is operating it like a cloud service. And so we are on this journey for a while. So one of the examples um that we have, we're moving some of the control software assets that customers today can deploy on-prem to an instance that they can deploy in a, in a cloud provider and just basically instantiate things there and then just run it that way. Right? And so the latest example for this is what we have, our Identity Service Engine that is now unlimited availability, available on AWS and will become available mid this year, both on AWS and Azure, as a service. You can just go to Marketplace, you can load it there and now increase. You can start running your policy control in the cloud managing your access infrastructure in your data center, in your campus, wherever you want to do it. And so that's just one example of how we see our Customers Network Operations team taking advantage of a cloud operating model and basically deploying their, their tools where they need them and when they need them. >> Dave: So >> What's the scope of I, I hope I'm saying it right, ISE, right? I.S.E, I think it's um, you call it ISE. What's the scope of that? Like for instance, to an effect my, or even, you know address, simplify my security approach? >> Absolutely. That's now coming to what is the beauty of the product itself? Yes. 
What you can do is really is, a lot of people talking about is, how do I get to a Zero Trust approach to networking? How do I get to a much more dynamic, flexible segmentation in my infrastructure, again, whether this was only campus access as well as the data center and ISE helps you there. You can use it as a pawn to define your policies and then inter-connect from there, right. In this particular case, we would, instead of ISE in a cloud as a software, alone, you now can connect and say, Hey, I want to manage and program my network infrastructure and my data center or my campus going to the respective controller, whether it's DNA Center for campus or whether it's the, the ACI policy controller. And so yes, what you get as an effect out of this is a very elegant way to automatically manage ,in one place, "what is my policy", and then drive the right segmentation in your network infrastructure. >> Yeah. Zero Trust. It was..Pre pandemic it was kind of a buzzword, now it's become a mandate. I, I wonder if we could talk about- >> Thomas: - Yes >> Yeah, right. I mean, so- >> Thomas: -Pretty much. >> I wondered if we could talk about cloud native apps. You got all these developers that are working inside organizations, they're maintaining legacy apps they're connecting their data to systems in the cloud. They're sharing that data. These developers, they're rapidly advancing their skillsets. How is Cisco enabling its infrastructure to support this world of cloud native, making infrastructure more responsive and agile for application developers? >> Yeah. So you were going to the talk we saw was the visibility. We talked about the operating model how our network operates actually want to use tools going forward. Now the next step to this is, it's not just the operator. How do they actually, where do they want to put these tools? Or how they interact with this tools? As well as quite frankly, as how let's say, a DevOps team, or application team or a cloud team also wants to take advantage of the programmability of the underlying network. And this is where we're moving into this whole cloud native discussion, right. Which has really two angles. So it's the cloud native way, how applications are being built. And then there is the cloud native way, how you interact with infrastructure, right? And so what we have done is we're A, putting in place the on-ramps between clouds, and then on top of it, we're exposing for all these tools APIs that can be used and leveraged by standard cloud tools or cloud-native tools, right? And one example or two examples we always have. And again, we're on this journey for a while, is both Ansible script capabilities that access from RedHat as well as Hashi Terraform capabilities that you can orchestrate across infrastructure to drive infrastructure automation. And what, what really stands behind it is what either the networking operations team wants to do or even the app team. They want to be able to describe the application as a code and then drive automatically or programmatically instantiation of infrastructure needed for that application. And so what you see us doing is providing all these capability as an interface for all our network tools, right. Whether this is ISE, what I just mentioned, whether this is our DCN controllers in the data center whether these are the controllers in the, in the campus for all of those, we have cloud-native interfaces. 
So operator or a DevOps team can actually interact directly with that infrastructure the way they would do today with everything that lives on the cloud or with everything how they built the application. >> Yeah, this is key. You can't even have the conversation of of Op cloud operating model that includes and comprises on-prem without programmable infrastructure. So that's, that's very important. Last question, Thomas, are customers actually using this? You made the announcement today. Are there, are there any examples of customers out there doing this? >> We do have a lot of customers out there that are moving down the path and using the Cisco High-performance Infrastructure both on the compute side, as well as on the Nexus side. One of the costumers, and this is like an interesting case, is Rakuten. Rakuten is a large telco provider, a mobile 5G operator in Japan and expanding, and as in different countries. And so people, some think, "Oh cloud" "You must be talking about the public cloud provider" "the big three or four". But if you look at it, there's a lot of the telco service providers are actually cloud providers as well and expanding very rapidly. And so we're actually very proud to work together with Rakuten and help them build high performance data center infrastructure based on HANA Gig and actually for a gig to drive their deployment to its 5G mobile cloud infrastructure, which is which is where the whole the whole world, which frankly is going. And so it's really exciting to see this development and see the power of automation visibility together with the High-performance infrastructure becoming a reality on delivering actually, services. >> Yeah, some great points you're making there. Yes, you have the big four clouds, they're enormous but then you have a lot of actually quite large clouds telcos that are either proximate to those clouds or they're in places where those hyper-scalers may not have a presence and building out their own infrastructure. So, so that's a great case study. Thomas.Hey, great having you on. Thanks much for spending some time with us. >> Yeah, same here. I appreciate it. Thanks a lot. >> All right. And thank you for watching everybody. This is Dave Vellante for theCUBE, the leader in tech event coverage. (upbeat music)
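As a rough illustration of the "describe the application as code, then programmatically instantiate the infrastructure it needs" pattern Thomas describes above, here is a minimal, hypothetical sketch of a desired-state reconciliation loop. Everything in it, the data classes, the controller client, and the method names, is invented for illustration; in practice this role is played by tools such as Terraform or Ansible talking to the controllers' published interfaces.

```python
# Hypothetical sketch of declarative, desired-state infrastructure automation.
# DesiredState, FakeControllerClient, and their methods are invented for
# illustration; real deployments would use Terraform/Ansible providers against
# the controllers' own APIs.
from dataclasses import dataclass, field

@dataclass
class DesiredState:
    app_name: str
    vlans: set[int] = field(default_factory=set)
    segments: set[str] = field(default_factory=set)   # policy segments

class FakeControllerClient:
    """Stand-in for a network controller API client (purely illustrative)."""
    def __init__(self):
        self.vlans: set[int] = set()
        self.segments: set[str] = set()

    def create_vlan(self, vlan_id: int):
        print(f"creating VLAN {vlan_id}")
        self.vlans.add(vlan_id)

    def create_segment(self, name: str):
        print(f"creating segment {name}")
        self.segments.add(name)

def reconcile(desired: DesiredState, controller: FakeControllerClient):
    """Converge the controller toward the state declared in code."""
    for vlan in desired.vlans - controller.vlans:
        controller.create_vlan(vlan)
    for seg in desired.segments - controller.segments:
        controller.create_segment(seg)

# "Application as code": the app declares what it needs from the network,
# and the reconcile step drives the infrastructure toward that declaration.
desired = DesiredState(app_name="checkout", vlans={110, 120},
                       segments={"pci-scope", "frontend"})
reconcile(desired, FakeControllerClient())
```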
LIVE Panel: Container First Development: Now and In the Future
>>Hello, and welcome. Very excited to see everybody here. DockerCon is going fantastic. Everybody's engaging in the chat, it's awesome to see. My name is Peter McKee, I'm the head of developer relations here at Docker. And today we're going to be talking about container first development, now and in the future. But before we do that, a couple little housekeeping items: first of all, yes, we are live. So if you're in our session, you can go ahead and chat, ask us questions. We'd love to get all your questions and answer them. Um, if you come to the main page on the website and you do not see the chat, go ahead and click on the blue button and that'll, uh, deep-dive you into our session, and you can interact with the chat there. Okay. Without further ado, let's just jump right into it. Katie, how are you? Welcome. Do you mind telling everybody who you are and a little bit about yourself? >>Absolutely. Hello everyone. My name is Katie and currently I am the ecosystem advocate at the Cloud Native Computing Foundation, or CNCF. My responsibility is to lead and represent the end-user community. So these are all the practitioners within the cloud native space that are vendor neutral. So they use cloud native technologies to build their services, but they don't sell them. So this is quite an important characteristic as well. My responsibility is to make sure to close the gap between these practitioners and the project maintainers, to make sure that there is a feedback loop around. Um, I have many roles within the community. I am on the advisory board for Keptn, which is a sandbox project. I'm working with OpenUK to make sure that open standards are used fairly across data, hardware, and software. And I have been, uh, affiliated with Udacity to make sure that, um, I'm distributing a cloud native fundamentals course to make cloud education available to everyone. So looking forward to this panel and chatting with everyone. >> Awesome. Yeah. Welcome. Glad to have you here. Johannes, how are you? Can you, uh, tell everybody a little bit about yourself and who you are? Yeah, sure. >> So hi everybody. My name is Johannes, I'm one of the co-founders at Gitpod, which in case you don't know is an open-source and container based development platform, which is probably also the reason why you, Peter, reached out and invited me here. So pleasure to be here, looking forward to the discussion. Um, yeah, though it is already a bit later in Munich. Um, and actually my girlfriend had a remote cocktail class with her colleagues tonight and it took me some stamina to really say no to all the Moscow mules that were prepared just over there in my living room. Oh wow. >> You're way better than me. Yeah. Well welcome. Thanks for joining us. Jerome. How are you? Good to see you. Can you tell everybody who you are and a little bit about yourself? Hi, >> Sure. Yeah, so I used to work at Docker, and some folks would say I'm a container hipster because I was running containers in production before it was hype. Um, I worked at Docker before it was even called Docker. And then since 2018, I'm now a freelancer doing training and consulting around Docker, containers, Kubernetes, all these things. So I used to help folks do stuff with Docker when I was there, and now I still help them with containers, more generally speaking. So kind of, uh, how do we say, same team, different company, or something like that? Yeah. >> Yeah. Perfect. Yeah. Good to see you. I'm glad you're on. Uh, Jacob, how are you?
Good to see you. Thanks for joining us. Good. Yeah. Thanks for having me tell, tell everybody a little bit about yourself who you are. >>Yeah. So, uh, I'm the creator of a tool called mutagen, which is an open source, uh, development tool for doing high performance file synchronization and, uh, network forwarding, uh, to enable remote development. And so I come from like a physics background where I was sort of always doing, uh, remote developments, you know, whether that was on a big central clusters or just like some sort of local machine that was a bit more powerful. And so I, after I graduated, I built this tool called mutagen, uh, for doing remote development. And then to my surprise, people just started using it to use, uh, with Docker containers. And, uh, that's kind of grown into its primary use case now. So I'm, yeah, I've gotten really involved with the Docker community and, uh, talked with a lot of great people and now I'm one of the Docker captains. So I get to talk with even more and, and join these events and yeah, but I'm, I'm kind of focused on doing remote development. Uh, cause I, you know, I like, I like having all my tools available on my local machine, but I also like being able to pull in a little bit more powerful hardware or uh, you know, maybe a software that I can't run locally. And so, uh, that's sort of my interest in, in Docker container. Yeah. Awesome. >>Awesome. We're going to come back to that for sure. But yeah. Thank you again. I really appreciate you all joining me and yeah. So, um, I've been thinking about container first development for a while and you know, what does that actually mean? So maybe, maybe we can define it in our own little way. So I, I just throw it out to the panel. When you think about container first development, what comes to mind? What w what, what are you kind of thinking about? Don't be shy. Go ahead. Jerome. You're never a loss of words >>To me. Like if I go back to the, kind of the first, uh, you know, training engagements we did back at Docker and kind of helping folks, uh, writing Dockerfiles to stop developing in containers. Um, often we were replacing, um, uh, set up with a bunch of Vagrant boxes and another, like the VMs and combinations of local things. And very often they liked it a lot and they were very soon, they wanted to really like develop in containers, like run this microservice. This piece of code is whatever, like run that in containers because that means they didn't have to maintain that thing on their own machine. So that's like five years ago. That's what it meant to me back then. However, today, if you, if you say, okay, you know, developing in containers, um, I'm thinking of course about things like get bought and, uh, I think it's called PR or something like that. >>Like this theme, maybe that thing with the ESCO, that's going to run in a container. And you, you have this vs code thing running in your browser. Well, obviously not in your browser, but in a container that you control from your browser and, and many other things like that, that I, I think that's what we, where we want to go today. Uh, and that's really interesting, um, from all kinds of perspectives, like Chevy pair pairing when we will not next to each other, but actually thousands of miles away, um, or having this little environment that they can put aside and come back to it later, without it having using resource in my machine. 
Um, I don't know, having this dev service running somewhere in the cloud without needing something like, it's at the rights that are like the, the possibilities are really endless. >>Yeah. Yeah. Perfect. Yeah. I'm, you know, a little while ago I was, I was torn, right. W do I spin up containers? Do I develop inside of my containers? Right. There's foul sinking issues. Um, you know, that we've been working on at Docker for a while, and Jacob is very, very familiar with those, right? Sometimes it, it becomes hard, but, and I, and I love developing in the cloud, but I also have this screaming, you know, fast machine sitting on my desktop that I think I should take advantage of. So I guess another question is, you know, should we be developing inside of containers? Is that a smart thing to do? Uh, I'd love to hear you guys' thoughts around that. >>You know, I think it's one of those things where it's, you know, for me container first development is really about, um, considering containers as sort of a first class citizen in, in terms of your development toolkit, right. I mean, there's not always that silver bullet, that's like the one thing you should use for everything. You know, you shouldn't, you shouldn't use containers if they're not fitting in or adding value to your workflow, but I think there's a lot of scenarios that are like, you know, super on super early on in the development process. Like as soon as you get the server kind of running and working and, you know, you're able to access it, you know, running on your local system. Uh that's I think that's when the value comes in to it to add containers to, you know, what you're doing or to your project. Right. I mean, for me, they're, um, they're more of a orchestrational tool, right? So if I don't have to have six different browser tabs open with like, you know, an API server running at one tab and a web server running in another tab and a database running in another tab, I can just kind of encapsulate those and, and use them as an automation thing. So I think, you know, even if you have a super powerful computer, I think there's still value in, um, using containers as, as a orchestrational mechanism. Yeah. Yeah, >>For sure. I think, I think one of the, one of my original aha moments with Docker was, oh, I can spin up different versions of a database locally and not have to install it and not have to configure it and everything, but, you know, it just ran inside of a container. And that, that was it. Although it's might seem simple to some people that's very, very powerful. Right. So I think being able to spin things up and containers very quickly is one of the super benefits. But yeah, I think, uh, developing in containers is, is hard right now, right. With, um, you know, and how do you do that? Right. Does anybody have any thoughts around, how do you go about that? Right. Should you use a container as just a development environment, so, you know, creating an image and then running it just with your dev tools in it, or do you just, uh, and maybe with an editor all inside of it, and it's just this process, that's almost like a VM. Um, yeah. So I'll just kick it back to the panel. I'd love to hear your thoughts on, you know, how do you set up and configure, uh, containers to develop in any thoughts around that? >>Maybe one step back again, to answer your question, what kind of container first development mean? I think it doesn't mean, um, by default that it has to be in the cloud, right? 
As you said, um, there are obvious benefits when it comes to the developer experience of containers, such as, I dunno, consistency, we have standardized tools dependencies for the dev side of things, but it also makes their dev environment more similar to all the pipeline that is somehow happening to the right, right. So CIC D all the way to production, it is security, right? Which also somehow comes with standardization. Um, but vulnerability scanning tools like sneak are doing a great job there. And, um, for us, it gets pod. One of the key reasons why we created get pod was literally creating this peace of mind for deaths. So from a developer's point of view, you do not need to take care anymore about all the hassle around setups and things that you will need to install. >>And locally, based on some outdated, REIT me on three operating systems in your company, everybody has something different and leading to these verbs in my machine situations, um, that really slow professional software developers down. Right. Um, back to your point, I mean, with good pod, we obviously have to package everything together in one container because otherwise, exactly the situation happens that you need to have five browser tabs open. So we try and leverage that. And I think a dev environment is not just the editor, right? So a dev environment includes your source code. It includes like a powerful shell. It includes file systems. It includes essentially all the tools you need in order to be productive databases and so on. And, um, yeah, we believe that should be encapsulated, um, um, in a container. >>Yeah. Awesome. Katie, you talked to a lot of end users, right. And you're talking to a lot of developers. What, what's your thoughts around container first development, right? Or, or what's the community out there screaming or screaming. It might be too to, uh, har you know, to, to two grand of the word. Right. But yeah, I love it. I love to hear what your, your thoughts. >>Absolutely. So I think when you're talking about continuing driven development, uh, the first thing that crosses my mind is the awareness of the infrastructure or the platform you're going to run your application on top of, because usually when you develop your application, you'd like to replicate as much as possible the production or even the staging environment to make sure that when you deploy your application, you have us little inconsistencies as possible, but at the same time, you minimize the risk for something to go wrong as well. So when it talking about the, the community, um, again, when you deploy applications and containers and Kubernetes, you have to use, you have awareness about, and probably apply some of the best practices, like introducing liveliness and readiness probes, to make sure that your application can restart in, in case it actually goes down or there's like a you're starving going CPU or something like that. >>So, uh, I think when it comes to deployment and development of an application, the main thing is to actually improve the end developer experience. I think there has been a lot of focus in the community to develop the tool, to actually give you the right tool to run application and production, but that doesn't necessarily, um, go back to how the end developer is actually enabling that application to run into that production system. So I think there has been, uh, this focus for the community identified now, and it's more, more, um, or trying to build momentum on enhancing the developer experience. 
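As a rough illustration of the kind of self-contained environment being described here — a minimal sketch using the Docker SDK for Python; the image name, paths, and test command are assumptions for the example, not a Gitpod API:

```python
# Sketch: a disposable dev environment -- toolchain baked into an image, source
# mounted in, commands run inside instead of on the host. Names are examples.
import os
import docker

client = docker.from_env()

dev = client.containers.run(
    "my-team/dev-env:latest",      # hypothetical image with compilers, linters, CLIs
    command="sleep infinity",      # keep it alive; we exec into it as needed
    volumes={os.getcwd(): {"bind": "/workspace", "mode": "rw"}},
    working_dir="/workspace",
    detach=True,
)

exit_code, output = dev.exec_run("make test")   # run the project's checks inside
print(output.decode())

dev.stop()
dev.remove()
```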
And we've seen this going through many, uh, where we think production of many tools did what has been one of them, which actually we can have this portable, um, development environment if you choose so, and you can actually replicate them across different teams in different machines, which is actually quite handy. >>But at the same time, we had tools such as local composts has been a great tool to run locally. We have tool such as carefully, which is absolutely great to automatically dynamically upload any changes to how within your code. So I think all of these kinds of tools, they getting more matured. And again, this is going back to again, we need to enhance our developer experience coming back to what is the right way to do so. Um, I think it really depends on the environment you have in production, because there's going to define some of the structures with the tool and you're going to have internally, but at the same time, um, I'd like to say that, uh, it really depends on, on what trucks are developing. Uh, so it's, it's, I would like to personally, I would like to see a bit more diversification in this area because we might have this competitive solutions that is going to push us to towards a new edge. So this is like, what definitely developer experience. If we're talking about development, that's what we need to enhance. And that's what I see the momentum building at the moment. >>Yeah. Yeah. Awesome. Jerome, I saw you shaking your head there in agreement, or maybe not, but what's your thoughts? >>I was, uh, I was just reacting until 82. Uh, it depends thinking that when I, when I do training, that's probably the answer that I gave the most, uh, each time somebody asks, oh, should we do diesel? And I was also looking at some of the questions in the chat about, Hey, the, should we like have a negatory in the, in the container or something like that. And folks can have pretty strong opinions one way or the other, but as a ways, it kind of depends what we do. It also depends of the team that we're working with. Um, you, you could have teams, you know, with like small teams with folks with lots of experience and they all come with their own Feb tools and editorials and plugins. So you know that like you're gonna have PRI iMacs out of my cold dead hands or something like that. >>So of course, if you give them something else, they're going to be extremely unhappy or sad. On the other hand, you can have team with folks who, um, will be less opinionated on that. And even, I don't know, let's say suddenly you start working on some project with maybe a new programming language, or maybe you're targeting some embedded system or whatever, like something really new and different. And you come up with all the tools, even the ADE, the extensions, et cetera, folks will often be extremely happy in that case that you're kind of giving them a Dettol and an ADE, even if that's not what they usually would, uh, would use, um, because it will come with all of the, the, the nice stage, you know, the compression, the, um, the, the, the bigger, the, whatever, all these things. And I think there is also something interesting to do here with development in containers. >>Like, Hey, you're going to start working on this extremely complex target based on whatever. And this is a container that has everything to get started. Okay. Maybe it's not your favorites editor, but it has all the customization and the conserver and whatever. Um, so you can start working right away. 
And then maybe later you, we want to, you know, do that from the container in a way, and have your own Emacs, atom, sublime, vs code, et cetera, et cetera. Um, but I think it's great for containers here, as well as they reserve or particularly the opportunity. And I think like the, that, that's one thing where I see stuff like get blood being potentially super interesting. Um, it's hard for me to gauge because I confess I was never a huge ID kind of person had some time that gives me this weird feeling, like when I help someone to book some, some code and you know, that like with their super nice IDE and everything is set up, but they feel kind of lost. >>And then at some point I'm like, okay, let's, let's get VI and grep and let's navigate this code base. And that makes me feel a little bit, you know, as this kind of old code for movies where you have the old, like colorful guy who knows going food, but at the end ends up still being obsolete because, um, it's only a going for movies that whole good for masters and the winning right. In real life, we don't have conformance there's anymore mentioned. So, um, but part of me is like, yeah, I like having my old style of editor, but when, when the modern editorial modern ID comes with everything set up and configured, that's just awesome. That's I, um, it's one thing that I'm not very good at sitting up all these little things, but when somebody does it and I can use it, it's, it's just amazing. >>Yeah. Yeah. I agree. I'm I feel the same way too. Right. I like, I like the way I've I have my environment. I like the tools that I use. I like the way they're set up. And, but it's a big issue, right? If you're switching machines, like you said, if you're helping someone else out there, they're not there, your key bindings aren't there, you can't, you can't navigate their system. Right? Yeah. So I think, you know, talking about, uh, dev environments that, that Docker's coming out with, and we're, you know, there's a lot, there, there's a, it's super complex, all these things we're talking about. And I think we're taking the approach of let's do something, uh, well, first, right. And then we can add on to that. Right. Because I think, you know, setting up full, full developed environments is hard, right. Especially in the, the, um, cloud native world nowadays with microservices, do you run them on a repo? >>Do you not have a monitor repo? Maybe that would be interesting to talk about. I think, um, you know, I always start out with the mono repos, right. And you have all your services in there and maybe you're using one Docker file. And then, because that works fine. Cause everything is JavaScript and node. And then you throw a little Python in there and then you throw a little go and now you start breaking things out and then things get too complex there, you know, and you start pulling everything out into different, get repos and now, right. Not everything just fits into these little buckets. Right. So how do you guys think maybe moving forward, how do we attack that night? How do we attack these? Does separate programming languages and environments and kind of bring them all together. You know, we, we, I hesitate, we solve that with compose around about running, right about executing, uh, running your, your containers. But, uh, developing with containers is different than running containers. Right. It's a, it's a different way to think about it. So anyway, sorry, I'm rattling on a little bit, but yeah. 
Be interesting to look at a more complex, uh, setup right. Of, uh, of, you know, even just 10 microservices that are in different get repos and different languages. Right. Just some thoughts. And, um, I'm not sure we all have this flushed out yet, but I'd love to hear your, your, you guys' thoughts around that. >>Jacob, you, you, you, you look like you're getting ready to jump there. >>I didn't wanna interrupt, but, uh, I mean, I think for me the issue isn't even really like the language boundary or, or, um, you know, a sub repo boundary. I think it's really about, you know, the infrastructure, right? Because you have, you're moving to an era where you have these cloud services, which, you know, some of them like S3, you can, you can mock up locally, uh, or run something locally in a container. But at some point you're going to have like, you know, cloud specific hardware, right? Like you got TPS or something that maybe are forming some critical function in your, in your application. And you just can't really replicate that locally, but you still want to be able to develop against that in some capacity. So, you know, my, my feeling about where it's going to go is you'll end up having parts of your application running locally, but then you also have, uh, you know, containers or some other, uh, element that's sort of cohabitating with, uh, you know, either staging or, or testing or production services that you're, uh, that you're working with. >>So you can actually, um, you know, test against a really or realistic simulation or the actual, uh, surface that you're running against in production. Because I think it's just going to become untenable to keep emulating all of that stuff locally, or to have to like duplicate these, you know, and, you know, I guess you can argue about whether or not it's a good thing that, that everything's moving to these kind of more closed off cloud services, but, you know, the reality of situation is that's where it's going to go. And there's certain hardware that you're going to want in the cloud, especially if you're doing, you know, machine learning oriented stuff that there's just no way you're going to be able to run locally. Right. I mean, if you're, even if you're in a dev team where you have, um, maybe like a central machine where you've got like 10 or 20 GPU's in it, that's not something that you're going to be able to, to, to replicate locally. And so that's how I kind of see that, um, you know, containers easing that boundary between different application components is actually maybe more about co-location, um, or having different parts of your application run in different locations, on different hardware, you know, maybe someone on your laptop, maybe it's someone, you know, AWS or Azure or somewhere. Yeah. It'd be interesting >>To start seeing those boundaries blur right. Working local and working in the cloud. Um, and you might even, you might not even know where something is exactly is running right until you need to, you know, that's when you really care, but yeah. Uh, Johanas, what's your thoughts around that? I mean, I think we've, we've talked previously of, of, um, you know, hybrid kind of environments. Uh, but yeah. What, what's your thoughts around that? >>Um, so essentially, yeah, I think, I mean, we believe that the lines between cloud and local will also potentially blur, and it's actually not really about that distinction. 
It's just packaging your dev environment in a way and provisioning your dev environment in a way that you are what we call always ready to coat. So that literally, um, you, you have that for the, you described as, um, peace of mind that you can just start to be creative and start to be productive. And if that is a container potentially running locally and containers are at the moment. I think, you know, the vehicle that we use, um, two weeks ago, or one week ago actually stack blitz announced the web containers. So potentially some things, well, it's run in the browser at some point, but currently, you know, Docker, um, is the standard that enables you to do that. And what we think will happen is that these cloud-based or local, um, dev environments will be what we call a femoral. So it will be similar to CIS, um, that we are using right now. And it doesn't literally matter, um, where they are running at the end. It's just, um, to reduce friction as much as possible and decrease and yeah, yeah. Essentially, um, avoid or the hustle that is currently involved in setting up and also managing dev environments, um, going forward, which really slows down specifically larger teams. >>Yeah. Yeah. Um, I'm going to shift gears a little bit here. We have a question from the audience in chat, uh, and it's, I think it's a little bit two parts, but so far as I can see container first, uh, development, have the challenges of where to get safe images. Um, and I was going to answer it, but let me keep it, let me keep going, where to get safe images and instrumentation, um, and knowing where exactly the problem is happening, how do we provide instrument instrumentation to see exactly where a problem might be happening and why? So I think the gist of it is kind of, of everything is in a container and I'm sitting outside, you know, the general thought around containers is isolation, right. Um, so how do I get views into that? Um, whether debugging or, or, or just general problems going on. I think that's maybe a broader question around the, how you, you know, you have your local hosts and then you're running everything containers, and what's the interplay there. W what's your thoughts there? >>I tend to think that containers are underused interactively. I mean, I think in production, you have this mindset that there's sort of this isolated environment, but it's very, actually simple to drop into a shell inside of a container and use it like you would, you know, your terminal. Um, so if you want to install software that way, you know, through, through an image rather than through like Homebrew or something, uh, you can kind of treat containers in that way and you can get a very, um, you know, direct access to the, to the space in which those are running in. So I think, I think that's maybe the step one is just like getting rid of that mindset, that, that these are all, um, you know, these completely encapsulated environments that you can't interact with because it's actually quite easy to just Docker exec into a container and then use it interactively >>Yeah. A hundred percent. And maybe I'll pass, I'm going to pass this question. You drone, but maybe demystify containers a little bit when I talked about this on the last, uh, panel, um, because we have a question in the, in the chat around, what's the, you know, why, why containers now I have VMs, right? And I think there's a misunderstanding in the industry, uh, about what, what containers are, we think they're fair, packaged stuff. 
And I think Jacob was hitting on that of what's underneath the hood. So maybe drown, sorry, for a long way to set up a question of what, what, what makes up a container, what is a container >>Is a container? Well, I, I think, um, the sharpest and most accurate and most articulate definition, I was from Alice gold first, and I will probably misquote her, but she said something like containers are a bunch of capsulated processes, maybe running on a cookie on welfare system. I'm not sure about the exact definition, but I'm going to try and, uh, reconstitute that like containers are just processes that run on a Unix machine. And we just happen to put a bunch of, um, red tape or whatever around them so that they are kind of contained. Um, but then the beauty of it is that we can contend them as much, or as little as we want. We can go kind of only in and put some actual VM or something like firecracker around that to give some pretty strong angulation, uh, all we can also kind of decontam theorize some aspects, you know, you can have a container that's actually using the, um, the, um, the network namespace of the host. >>So that gives it an entire, you know, wire speed access to the, to the network of the host. Um, and so to me, that's what really interesting, of course there is all the thing about, oh, containers are lightweight and I can pack more of them and they start fast and the images can be small, yada yada, yada. But to me, um, with my background in infrastructure and building resilient, things like that, but I find really exciting is the ability to, you know, put the slider wherever I need it. Um, the, the, the ability to have these very light containers, all very heavily, very secure, very anything, and even the ability to have containers in containers. Uh, even if that sounds a little bit, a little bit gimmicky at first, like, oh, you know, like you, you did the Mimi, like, oh, I heard you like container. >>So I put Docker when you're on Docker. So you can run container for you, run containers. Um, but that's actually extremely convenient because, um, as soon as you stop building, especially something infrastructure related. So you challenge is how do you test that? Like, when we were doing.cloud, we're like, okay, uh, how do we provision? Um, you know, we've been, if you're Amazon, how do you provision the staging for us installed? How do you provision the whole region, Jen, which is actually staging? It kind of makes things complicated. And the fact that we have that we can have containers within containers. Uh, that's actually pretty powerful. Um, we're also moving to things where we have secure containers in containers now. So that's super interesting, like stuff like a SIS box, for instance. Um, when I saw that, that was really excited because, uh, one of the horrible things I did back in the days as Docker was privileged containers, precisely because we wanted to have Docker in Docker. >>And that was kind of opening Pandora's box. That's the right, uh, with the four, because privileged containers can do literally anything. They can completely wreck up the machine. Um, and so, but at the same time, they give you the ability to run VPNs and run Docker in Docker and all these cool things. You can run VM in containers, and then you can list things. So, um, but so when I saw that you could actually have kind of secure containers within containers, like, okay, there is something really powerful and interesting there. 
And I think for folks, well, precisely when you want to do development in containers, especially when you move that to the cloud, that kind of stuff becomes a really important and interesting because it's one thing to have my little dev thing on my local machine. It's another thing when I want to move that to a swarm or Kubernetes cluster, and then suddenly even like very quickly, I hit the wall, which is, oh, I need to have containers in my containers. Um, and then having a runtime, like that gets really intense. >>Interesting. Yeah, yeah, yeah. And I, and jumping back a bit, um, yeah, uh, like you said, drum at the, at the base of it, it containers just a, a process with, with some, uh, Abra, pardon me, operating constructs wrapped around it and see groups, namespaces those types of things. But I think it's very important to, for our discussion right. Of, uh, developers really understanding that, that this is just the process, just like a normal process when I spin up my local bash in my term. Uh, and I'm just interacting with that. And a lot of the things we talk about are more for production runtimes for securing containers for isolating them locally. I don't, I don't know. I'll throw the question out to the panel. Is that really relevant to us locally? Right. Do we want to pull out all of those restrictions? What are the benefits of containers for development, right. And maybe that's a soft question, but I'd still love to hear your thoughts. Maybe I'll kick it over to you, Katie, would you, would you kick us off a little bit with that? >>I'll try. Um, so I think when, again, I was actually thinking of the previous answers because maybe, maybe I could do a transition here. So, interesting, interesting about containers, a piece of trivia, um, the secrets and namespaces have been within the Linux kernel since 2008, I think, which just like more than 10 years ago, hover containers become popular in the last years. So I think it's, it's the technology, but it's about the organization adopting this technology. So I think why it got more popular now is because it became the business differentiator organizations started to think, how can I deliver value to my customers as quickly as possible? So I think that there should be this kind of two lane, um, kind of progress is the technology, but it's at the same time organization and cultural now are actually essential for us to develop, uh, our applications locally. >>Again, I think when it's a single application, if you have just one component, maybe it's easier for you to kind of run it locally, have a very simple testing environment. Sufficient is a container necessary, probably not. However, I think it's more important when you're thinking to the bigger picture. When we have an architecture that has myriads of microservices at the basis, when it's something that you have to expose, for example, an API, or you have to consume an API, these are kind of things where you might need to think about a lightweight set up within the containers, only local environment to make sure that you have at least a similar, um, environment or a configuration to make sure that you test some of the expected behavior. Um, I think the, the real kind of test you start from the, the dev cluster will like the dev environment. >>And then like for, for you to go to staging and production, you will get more clear into what exactly that, um, um, configuration should be in the end. 
However, at the same time, again, it's, it's more about, um, kind of understanding why you continue to see this, the thing, like, I don't say that you definitely need containers at all times, but there are situations when you have like, again, multiple services and you need to replicate them. It's just the place to, to, to work with these kind of, um, setups. So, um, yeah, really depends on what you're trying to develop here. Nothing very specific, unfortunately, but get your product and your requirements are going to define what you're going to work with. >>Yeah, no, I think that's a great answer, right. I think one of the best answers in, in software engineering and engineering in general as well, it depends. Right. It's things are very specific when we start getting down to the details, but yeah, generally speaking, you know, um, I think containers are good for development, but yeah, it depends, right. It really depends. Is it helping you then? Great. If it's hindering you then, okay. Maybe think what's, what's the hindrance, right. And are containers the right solution. I agree. 110% and, >>And everything. I would like absurd this too as well. When we, again, we're talking about the development team and now we have this culture where we have the platform and infrastructure team, and then you have your engineering team separately, especially when the regulations are going to be segregated. So, um, it's quite important to understand that there might be a, uh, a level of up-skilling required. So pushing for someone to use containers, because this is the right way for you to develop your application might be not, uh, might not be the most efficient way to actually develop a product because you need to spend some time to make sure that the, the engineering team has the skills to do so. So I think it's, it's, again, going back to my answers here is like, truly be aware of how you're trying to develop how you actually collaborate and having that awareness of your platform can be quite helpful in developing your, uh, your publication, the more importantly, having less, um, maybe blockers pushing it to a production system. >>Yeah, yeah. A hundred percent. Yeah. The, uh, the cultural issue is, is, um, within the organization, right. Is a very interesting thing. And it, and I would submit that it's very hard from top down, right. Pushing down tools and processes down to the dev team, man, we'll just, we'll just rebel. It usually comes from the bottom up. Right. What's working for us, we're going to do right. And whether we do it in the shadows and don't let it know, or, or we've conformed, right. Yeah. A hundred percent. Um, interesting. I would like to think a little bit in the future, right? Like, let's say, I don't know, two, three years from now, if, if y'all could wave a and I'm from Texas. So I say y'all, uh, if you all could wave a magic wand, what, what, what would that bring about right. What, what would, what would be the best scenario? And, and we just don't have to say containers. Right. But, you know, what's the best development environment and I'm going to kick it over to you, Jacob. Cause I think you hinted at some of that with some hybrid type of stuff, but, uh, yeah. Implies, they need to keep you awake. You're, you're, you're, uh, almost on the other side of the world for me, but yeah, please. 
>>Um, I think, you know, it's, it's interesting because you have this technology that you've been, that's been brought from production, so it's not, um, necessarily like the right or the normal basis for development. So I think there's going to be some sort of realignment or renormalization in terms of, uh, you know, what the, what the basis and the abstractions that we're using on a daily basis are right. Like images and containers as they exist now are really designed for, um, for production use cases. And, and in terms of like, even even the ergonomics of opening a shell inside a container, I think is something that's, um, you know, not as polished or not as smooth as it could be because they've come from production. And so I think it's important, like not to, not to have people look at, look at the technology as it exists now and say like, okay, this is slightly rough around the edges, or it wasn't designed for this use case and think, oh, there's, you know, there's never any way I could use this for, for my development of workflows. >>I think it's, you know, it's something Docker's exploring now with, uh, with the, uh, dev containers, you know, it's, it's a new, and it's an experimental paradigm and it may not be what the final picture looks like. As, you know, you were saying, there's going to be kind of a baseline and you'll add features to that or iterate on that. Um, but I think that's, what's interesting about it, right? Cause it's, there's not a lot of things as developers that you get to play with that, um, that are sort of the new technology. Like if you're talking about things you're building to ship, you want to kind of use tried and true components that, you know, are gonna, that are going to be reliable. But I think containers are that interesting point where it's like, this is an established technology, but it's also being used in a way now that's completely different than what it was designed for. And, and, you know, as hackers, I think that's kind of an interesting opportunity to play with it, but I think, I think that's, what's going to happen is you're just going to see kind of those production, um, designed, uh, knobs kind of sanded down or redesigned for, for development. So that's kind of where I see it going. >>Yeah. Yeah. And I think that's what I was trying to hint out earlier is like, um, yeah, just because all these things are there, does it actually mean we need them locally? Right. Do they make sense? I, I agree. A hundred percent, uh, anybody else drawn? What are your thoughts around that? And then, and then, uh, I'll probably just ask all of you. I'd love to hear each of your thoughts of the future. >>I had a thought was maybe unrelated, but I was kind of wondering if we would see something on the side of like energy efficiency in some way. Um, and maybe it's just because I've been thinking a lot about like climate change and things like that recently, and trying to reduce like the, uh, the energy use energy use and things like that. Perhaps it's also because I recently got a new laptop, which on paper is super awesome, but in practice, as soon as you try to have like two slack tabs and a zoom call, you know, it's super fast, both for 30 seconds. And after 30 seconds, it blows its thermal budget and it's like slows down to a crawl. And I started to think, Hmm, maybe, you know, like before we, we, we were thinking about, okay, I don't have that much CPU available. So you have to be kind of mindful about that. 
>>And now I wonder how are we going to get in something similar to that, but where you try to save CPU cycles, not just because you don't have that many CPU cycles, but more because you know, that you can't go super fast for super long when you are on one of these like small laptops or tablets or phones, like you have this demo budget to take into account. And, um, I wonder if, and how like, is there something where goaltenders can do some things here? I guess it can be really interesting if they can do some the equivalent of like Docker top and Docker stats. And if I could see, like how much what's are these containers using, I can already do that with power top on Linux, for instance, like process by process. So I'm thinking I could see what's the power usage of, of some containers. Um, and I wonder if down the line, is this going to be something useful or is this just silly because we can just masquerade CPU usage for, for Watson and forget about it. >>Yeah. Yeah. It was super, super interesting, uh, perspective for sure. I'm going to shut up because I want to, I want to give, make sure I give Johannes and Katie time. W w what are your thoughts of the future around, let's just say, you know, container development in general, right? You want, you want to start absolutely. Oh, honest, Nate. Johns wants more time. I say, I'll try not to. Beneficiate >>Expensive here, but, um, so one of the things that we've we've touched upon earlier in the panel was multicloud strategy. And I was reading one of the data reports from it was about the concept of Kubernetes from gamer Townsville. But what is working for you to see there is that more and more organizations are thinking about multicloud strategy, which means that you need to develop an application or need an infrastructure or a component, which will allow you to run this application bead on a public cloud bead, like locally in a data center and so forth. And here, when it comes to this kind of, uh, maybe problems we come across open standards, this is where we require something, which will allow us to execute our application or to run our platform in different environments. So when you're thinking about the application or development of the application, one of the things that, um, came out in 2019 at was the Oakland. >>Um, I wish it was Kybella, which is a, um, um, an open application model based application, which allows you to describe the way you would like your service to be executed in different environments. It doesn't need to be well developed specifically for communities. However, the open application model is specialized. So specialized tries to cover multiple platforms. You will be able to execute your application anywhere you want it to. So I think that that's actually quite important because it completely obstructs what is happening underneath it, completely obstructs notions, such as containers, uh, or processes is just, I want this application and I want to have this kind of behavior is so example of, to scale in this conditions or to, um, to be exposed for these, uh, end points and so forth. And everything that I would like to mention here is that maybe this transcends again, the, uh, the logistics of the application development, but it definitely will impact the way we run our applications. >>So one of the biggest, well, one of the new trends that is kind of gaining momentum now has been around Plaza. And this is again, something which is trying to present what we have the on containers. 
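For reference, the per-container, `docker stats`-style readings mentioned a moment ago are already reachable from a few lines of code. A small sketch with the Docker SDK for Python; Docker exposes CPU and memory counters but not energy, so the watts figure below is a crude estimate derived from CPU share and an assumed host power budget:

```python
# Sketch: a per-container "docker stats"-style readout. The watts column is a
# rough, made-up estimate, since Docker does not report energy use.
import docker

client = docker.from_env()
HOST_CPU_WATTS = 45  # assumption: approximate package power of the host CPU

for c in client.containers.list():
    s = c.stats(stream=False)
    pre = s.get("precpu_stats", {})
    cpu_delta = (s["cpu_stats"]["cpu_usage"]["total_usage"]
                 - pre.get("cpu_usage", {}).get("total_usage", 0))
    sys_delta = (s["cpu_stats"].get("system_cpu_usage", 0)
                 - pre.get("system_cpu_usage", 0))
    cpus = s["cpu_stats"].get("online_cpus", 1)
    cpu_pct = (cpu_delta / sys_delta) * cpus * 100 if sys_delta > 0 else 0.0
    watts = HOST_CPU_WATTS * cpu_pct / 100
    print(f"{c.name:24s} cpu={cpu_pct:5.1f}%  ~{watts:.1f} W (estimate)")
```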
Again, it's focusing on the, it's kind of a cyclical, um, uh, action movement that we have here. When we moved from the VMs to containers, it was smaller footprint. We want like better execution, one, this agnosticism of the platforms. We have the same thing happening here with Watson, but again, it consents a new, um, uh, kind of, well, it teaches in you, uh, in new climax here, where again, we shrink the footprint of the cluster. We have a better isolation of all the services. We have a better trend, like portability of how services and so forth. So there is a great potential out there. And again, like why I'm saying this is some of these technologies are gonna define the way we're gonna do our development of the application on our local environment. >>That's why it's important to kind of maybe have an eye there and maybe see if some of those principles of some of those technologies we can bring internally as well. And just this, like a, a final thought here, um, security has been mentioned as well. Um, I think it's something which has been, uh, at the forefront, especially when it comes to containers, uh, especially when it comes to enterprise organizations and those who are regulated, which I feel come very comfortable to run their application within a VM where you have the full isolation, you can do what we have complete control of what's happening inside that compute. So, um, again, security has been at the forefront at the moment. So I know it has mentioned in the panel before. I'd like to mention that we have the security white paper, which has been published. We have the software supply chain, white paper as well, which twice to figure out or define some of these good practices as well, again, which you can already apply from your development environment and then propagate them to production. So I'm just going to leave, uh, all of these. That's all. >>That's awesome. And yeah, well, while is very, very interesting. I saw the other day that, um, and I forget who it was, maybe, maybe all can remember, um, you know, running, running the node, um, engine inside of, you know, in Walzem inside of a browser. Right. And, uh, at first glance I said, well, we already have a JavaScript execution engine. Right. And it's kind of like Docker and Docker. So you have, uh, you know, you have the browser, then, then you have blossom and then you have a node, you know, a JavaScript runtime. And, and I didn't understand was while I was, um, you know, actually executing is JavaScript and it's not, but yeah, it's super interesting, super powerful. I always felt that the browser was, uh, Java's what write once run anywhere kind of solution, right. That never came about, they were thinking of set top, uh, TV boxes and stuff like that, which is interesting. >>I don't know, you'll some of the history of Java, but yeah. Wasm is, is very, I'm not sure how to correctly pronounce it, but yeah, it's extremely interesting because of the isolation in that boxing. Right. And running powerful languages that were used to inside of a more isolated environment. Right. And it's almost, um, yeah, it's kind of, I think I've mentioned it before that the containers inside of containers, right. Um, yeah. So Johannes, hopefully I gave you enough time. I delayed, I delayed as much as I can. My friend, you better, you better just kidding. I'm just kidding, please, please. 
>>It was by the way, stack let's and they worked together with Google and with Russell, um, developing the web containers, it's called there's, it's quite interesting. The research they're doing there. Yeah. Yeah. I mean, what we believe and I, I also believe is that, um, yeah, probably somebody is doing to death environments, what Docker did to servers and at least that good part. We hope that somebody will be us. Um, so what we mean by that is that, um, we think today we are still somehow emotionally attached to our dev environments. Right. We give them names, we massage them over time, which can also have its benefits, but it's, they're still pets in some way. Right. And, um, we believe that, um, environments in the future, um, will be treated similar like servers today as automated resources that you can just spin up and close down whenever you need them. >>Right. And, um, this trend essentially that you also see in serverless, if you look at what kind of Netlify is doing a bit with preview environments, what were sellers doing? Um, there, um, we believe will also arrive at, um, at Steph environments. It probably won't be there tomorrow. So it will take some time because if there's also, you know, emotion involved into, in that, in that transition, but ultimately really believe that, um, provisioning dev environments also in the cloud allows you to leverage the power of the cloud and to essentially build all that stuff that you need in order to work in advance. Right? So that's literally either command or a button. So either, I don't know, a command that spins up your local views code and SSH into, into a container, or you do it in a browser, um, will be the way that professional development teams will develop in the future. Probably let's see in our direction of document, we say it's 2000 to 23. Let's see if that holds true. >>Okay. Can we, can, we let's know. Okay. Let's just say let's have a friendly bet. I don't know that's going to be closed now, but, um, yeah, I agree. I, you know, it's my thought around is it, it's hard, right? Th these are hard. And what problems do you tackle first, right? Do you tackle the day, one of, uh, you know, of development, right. I joined a team, Hey, here's your machine? And you have Docker installed and there you go, pull, pull down your environment. Right. Is that necessarily just an image? You know, what, what exactly is that sure. Containers are involved. Right. But that's, I mean, you, you've probably all gone through it. You joined a team, new project, even open-source project, right there. There's a huge hurdle just to get everything configured, to get everything installed, to get it up and running, um, you know, set aside all understanding the code base. >>Cause that's a different issue. Right. But just getting everything running locally and to your point earlier, Jacob of around, uh, recreating, local production cues and environments and, you know, GPS or anything like that, right. Is extremely hard. You can't do a lot of that locally. Right. So I think that's one of the things I'd love to see tackled. And I think that's where we're tackling in dev environments, uh, with Docker, but then now how do you become productive? Right. And where do we go from there? And, uh, and I would love to see this kind of hybrid and you guys have been all been talking about it where I can, yes. I have it configured everything locally on my nice, you know, apple notebook. Right. And then, you know, I go with the family and we go on vacation. 
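A small sketch of what treating a dev environment as an ephemeral, automated resource — rather than a long-lived pet — can look like with the Docker SDK for Python; the image name and build command are assumptions for illustration:

```python
# Sketch: an ephemeral dev environment -- created on demand, destroyed when
# done, never named, never massaged over time. The image is a placeholder.
import contextlib
import os
import docker

@contextlib.contextmanager
def ephemeral_dev_env(image="my-team/dev-env:latest"):
    client = docker.from_env()
    env = client.containers.run(
        image, "sleep infinity", detach=True,
        volumes={os.getcwd(): {"bind": "/workspace", "mode": "rw"}},
        working_dir="/workspace",
    )
    try:
        yield env
    finally:
        env.stop()
        env.remove()

with ephemeral_dev_env() as env:
    print(env.exec_run("make build").output.decode())
```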
I don't want to drag this 16 inch, you know, Mac laptop with me. >>And I want to take my nice iPad with the magic keyboard and all the bang stuff. Right. And I just want to fire up and I pick up where I left off. Right. And I keep coding and environment feels, you know, as much as it can that I'm still working at backup my desktop. I think those, those are very interesting to me. And I think reproducing, uh, the production running runtime environments as close as possible, uh, when I develop my, I think that's extremely powerful, extremely powerful. I think that's one of the hardest things, right. It's it's, uh, you know, we used to say, we, you debug in production. Right. We would launch, right. We would do, uh, as much performance testing as possible. But until you flip that switch on a big, on a big site, that's where you really understand what is going to break. >>Right. Well, awesome. I think we're just about at time. I really, really appreciate everybody joining me. Um, it's been a pleasure talking to all of you. We have to do this again. If I, uh, hopefully, you know, I I'm in here in America and we seem to be doing okay with COVID, but I know around the world, others are not. So my heart goes out to them, but I would love to be able to get out of here and come see all of you and meet you in person, maybe break some bread together. But, um, again, it was a pleasure talking to you all, and I really appreciate you taking the time. Have a good evening. Cool. >>Thanks for having us. Thanks for joining us. Yes.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Tristan | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
John | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Steve Mullaney | PERSON | 0.99+ |
Katie | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
Charles | PERSON | 0.99+ |
Mike Dooley | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Chris | PERSON | 0.99+ |
Tristan Handy | PERSON | 0.99+ |
Bob | PERSON | 0.99+ |
Maribel Lopez | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Mike Wolf | PERSON | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Merim | PERSON | 0.99+ |
Adrian Cockcroft | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Brian | PERSON | 0.99+ |
Brian Rossi | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Chris Wegmann | PERSON | 0.99+ |
Whole Foods | ORGANIZATION | 0.99+ |
Eric | PERSON | 0.99+ |
Chris Hoff | PERSON | 0.99+ |
Jamak Dagani | PERSON | 0.99+ |
Jerry Chen | PERSON | 0.99+ |
Caterpillar | ORGANIZATION | 0.99+ |
John Walls | PERSON | 0.99+ |
Marianna Tessel | PERSON | 0.99+ |
Josh | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Jerome | PERSON | 0.99+ |
Lori MacVittie | PERSON | 0.99+ |
2007 | DATE | 0.99+ |
Seattle | LOCATION | 0.99+ |
10 | QUANTITY | 0.99+ |
five | QUANTITY | 0.99+ |
Ali Ghodsi | PERSON | 0.99+ |
Peter McKee | PERSON | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Eric Herzog | PERSON | 0.99+ |
India | LOCATION | 0.99+ |
Mike | PERSON | 0.99+ |
Walmart | ORGANIZATION | 0.99+ |
five years | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Kit Colbert | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Tanuja Randery | PERSON | 0.99+ |
CISCO FUTURE CLOUD FULL V3
>>Welcome to Future Cloud, made possible by Cisco. My name is Dave Vellante and I'm your host. You know, the cloud is evolving, like the universe, expanding at an accelerated pace. No longer is the cloud just a remote set of services, you know, somewhere up there. No, the cloud is extending to on-premises data centers, which are reaching into the cloud through adjacent locations. Clouds are being connected to each other, and eventually they're going to stretch to the edge and the far edge. Workloads, location, latency, local laws and economics will define the value customers can extract from this new cloud model, which unifies the operating experience independent of location. Cloud is moving rapidly from a spare capacity/infrastructure resource to a platform for application innovation. Now, the challenge is how to make this new cloud simple, secure, agile and programmable. Oh, and it has to be cloud agnostic. Now, the real opportunity for customers is to tap into a layer across clouds and data centers that abstracts the underlying complexity of the respective clouds and locations. And it's got to accommodate both mission-critical workloads as well as general-purpose applications across the spectrum, cost effectively. Enabling simplicity with minimal labor costs requires infrastructure, i.e. hardware, software, tooling, machine intelligence, AI, and partnerships within an ecosystem. It's got to accommodate a variety of application deployment models like serverless and containers, and support for traditional work on VMs. By the way, it also requires a roadmap that will take us well into the next decade, because the next 10 years will not be like the last. So why are we here? Well, The Cube is covering Cisco's announcements today that connect next-generation compute, shared memory, intelligent networking and storage resource pools, bringing automation, visibility, application assurance and security to this new decentralized cloud. Now, of course, in today's world you wouldn't be considered modern without supporting containers, AI, and the operational tooling that is demanded by forward-thinking practitioners. So sit back and enjoy The Cube's special coverage of Cisco's Future Cloud. >>From around the globe, it's The Cube, presenting Future Cloud, one event, a world of opportunities, brought to you by Cisco. >>We're here with Vijoy Pandey, VP of emerging tech and incubation at Cisco. Vijoy, good to see you. Welcome. >>Good to see you as well. Thank you, Dave, and pleasure to be here. >>So in 2020 we kind of had to redefine the notion of agility when it came to digital business. Organizations had to rethink their concept of agility and business resilience. What are you seeing in terms of how companies are thinking about their operations in this sort of new abnormal context? >>Yeah, I think that's a great question. I think what we're seeing is that pretty much the application is the center of the universe. And if you think about it, the application is actually driving brand recognition and the brand experience and the brand value. So the example I like to give is, think about a banking app that did everything that you would expect it to do, but if you wanted to withdraw cash from your bank, you would actually have to go to the ATM, punch in some numbers, look at your screen, go through a process and then finally withdraw cash.
Think about what that would do in a post-pandemic era where people are trying to go contactless. And so in a situation like this, the digitization efforts that all of these companies are going through, and the modernization and the automation, is what is driving brand recognition, brand trust and brand experience.
And as you do that, there is a smooth transition between that discovery connectivity and ease of consumption and security with the idea cops. Netapp psych ops teams and see source to ensure that they are not doing something that the organization won't allow them to do in a very seamless manner. >>I want to go back and talk about security but I want to add another complexity before we do that. So for a lot of organizations in the public cloud became a staple of keeping the lights on during the pandemic but it brings new complexities and differences in terms of latency security, which I want to come back to deployment models etcetera. So what are some of the specific networking challenges that you've seen with the cloud native architecture is how are you addressing those? >>Yeah. In fact, if you think about cloud, to me that is a that is a different way of seeing a distributed system. And if you think about a distributed system, what is at the center of the distributed system is the network. So my my favorite comment here is that the network is the wrong time for all distribute systems and modern applications. And that is true because if you think about where things are today, like you said, there's there's cloud assets that a developer might use in the banking example that I gave earlier. I mean if you want to build a contact less app so that you get verified, a customer gets verified on the app. They walk over to the ATM and they were broadcast without touching that ATM. In that kind of an example, you're touching the mobile Rus, let's say U S A P is you're touching cloud API is where the back end might sit. You're touching on primary PS maybe it's an oracle database or a mainframe even where transactional data exists. You're touching branch pipes were the team actually exists and the need for consistency when you withdraw cash and you're carrying all of this and in fact there might be customer data sitting in salesforce somewhere. So it's cloud API is a song premise branch. It's ass is mobile and you need to bring all of these things together and over time you will see more and more of these API is coming from various as providers. So it's not just cloud providers but saas providers that the developer has to use. And so this complexity is very, very real. And this complexity is across the wide open internet. So the application is built across this wide open internet. So the problems of discovery ability, the problems of being able to simply connect these apis and manage the data flow across these apis. The problems of consistency of policy and consumption because all of these areas have their own nuances and what they mean, what the arguments mean and what the A. P. I. Actually means. How do you make it consistent and easy for the developer? That is the networking problem. And that is a problem of building out this network, making traffic engineering easy, making policy easy, making scale out, scale down easy, all of that our networking problems. And so we are solving those problems uh Francisco. >>Yeah the internet is the new private network but it's not so private. So I want to go back to security. I often say that the security model of building a moat, you dig the moat, you get the hardened castle that's just outdated now that the queen is left her castle, I always say it's dangerous out there. And the point is you touched on this, it's it's a huge decentralized system and with distributed apps and data, that notion of perimeter security, it's just no longer valid. 
So I wonder if you could talk more about how you're thinking about this problem and you definitely address some of that in your earlier comments. But what are you specifically doing to address this and how do you see it evolving? >>Yeah, I mean, that's that's a very important point. I mean, I think if you think about again the wide open internet being the wrong time for all modern applications, what is perimeter security in this uh in this new world? I mean, it's to me it boils down to securing an API because again, going with that running example of this contact lists cash withdrawal feature for a bank, the ap wherever it's it's entre branch SAs cloud, IOS android doesn't matter that FBI is your new security perimeter. And the data object that is trying to access is also the new security perimeter. So if you can secure ap to ap communication and P two data object communication, you should be good. So that is the new frontier. But guess what software is buggy? Everybody's software not saying Cisco software, everybody's Softwares buggy. Uh software is buggy, humans are not reliable and so things mature, things change, things evolve over time. So there needs to be defense in depth. So you need to secure at the API layer had the data object layer, but you also need to secure at every layer below it so that you have good defense and depth if any layer in between is not working out properly. So for us that means ensuring ap to ap communication, not just during long time when the app has been deployed and is running, but during deployment and also during the development life cycle. So as soon as the developer launches an ID, they should be able to figure out that this api is security uses reputable, it has compliant, it is compliant to my to my organization's needs because it is hosted, let's say from Germany and my organization wants appears to be used only if they are being hosted out of Germany so compliance needs and and security needs and reputation. Is it available all the time? Is it secure? And being able to provide that feedback all the time between the security teams and the developer teams in a very seamless real time manner. Yes, again, that's something that we're trying to solve through some of the services that we're trying to produce in san Francisco. >>Yeah, I mean those that layered approach that you're talking about is critical because every layer has, you know, some vulnerability. And so you you've got to protect that with some depth in terms of thinking about security, how should we think about where where Cisco's primary value add is, I mean as parts of the interview has a great security business is growing business, Is it your intention to to to to add value across the entire value chain? I mean obviously you can't do everything so you've got a partner but so has the we think about Cisco's role over the next I'm thinking longer term over the over the next decade. >>Yeah, I mean I think so, we do come in with good strength from the runtime side of the house. So if you think about the security aspects that we haven't played today, uh there's a significant set of assets that we have around user security around around uh with with do and password less. We have significant assets in runtime security. I mean, the entire portfolio that Cisco brings to the table is around one time security, the secure X aspects around posture and policy that will bring to the table. And as you see, Cisco evolve over time, you will see us shifting left. 
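One way to read that "shifting left" point: the same reputation and compliance signals gathered at runtime can gate which APIs a developer wires in while the app is still in the IDE. The sketch below is purely illustrative and assumes a hypothetical, locally maintained catalog and policy; it is not a Cisco product API.

```python
# Illustrative only: gate API dependencies against org policy before use.
# The catalog entries and policy values here are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class ApiMetadata:
    name: str
    hosted_region: str      # where the provider says the API is hosted
    tls_required: bool      # endpoint enforces TLS
    reputation: float       # 0.0 - 1.0 score learned from runtime telemetry

POLICY = {
    "allowed_regions": {"DE", "EU"},   # e.g. "only APIs hosted out of Germany/EU"
    "min_reputation": 0.8,
    "require_tls": True,
}

CATALOG = [
    ApiMetadata("payments-api", "DE", True, 0.93),
    ApiMetadata("geo-lookup", "US", True, 0.88),
    ApiMetadata("legacy-scores", "DE", False, 0.41),
]

def violations(api: ApiMetadata, policy: dict) -> list[str]:
    problems = []
    if api.hosted_region not in policy["allowed_regions"]:
        problems.append(f"hosted in {api.hosted_region}, outside allowed regions")
    if policy["require_tls"] and not api.tls_required:
        problems.append("does not enforce TLS")
    if api.reputation < policy["min_reputation"]:
        problems.append(f"reputation {api.reputation:.2f} below threshold")
    return problems

if __name__ == "__main__":
    for api in CATALOG:
        issues = violations(api, POLICY)
        verdict = "OK" if not issues else "BLOCKED: " + "; ".join(issues)
        print(f"{api.name}: {verdict}")
```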
I know shift left is an overused term, but that is where security is moving towards, and so that is where API security and data security are moving towards. So we take what we learn during runtime, because runtime is where you learn what's available and where you can apply all of the ML and AI models to figure out what works and what doesn't, and we move those learnings, those catalogs, that reputation database, into the deployment and development life cycle, making sure that it's part of that entire develop-to-deploy-to-runtime chain. That is what you will see Cisco do over time. >>That's a fantastic, phenomenal perspective, Vijoy. Thanks for coming on The Cube. Great to have you, and I look forward to having you again. >>Absolutely. Thank you. >>In a moment, we'll talk hybrid cloud applications, operations and potential gaps that need to be addressed with Kaustubh Das and VJ Venugopal. You're watching The Cube, the global leader in high tech coverage. >>Your cloud, it isn't just a cloud. It's everything flowing through it. It's alive. Connecting users, applications, data and devices, and whether it's cloud native, hybrid or multi-cloud, it's more distributed than ever. One company takes you inside, giving you the visibility and the insight you need to take action. One company has the vision to understand it all, and the experience to securely connect it all, on any platform, in any environment. So you can work wherever work takes you, in a cloud-first world. Between your cloud and being cloud smart, there's a bridge. Cisco, the bridge to possible. >>Okay. We're here with Kaustubh Das, who is the Senior Vice President and General Manager of Cloud and Compute at Cisco, and VJ Venugopal, who is the Senior Director for Product Management for Cloud Compute at Cisco. KD, VJ, good to see you guys. Welcome. >>Great to see you, Dave. Good to be here. >>KD, let's talk about cloud. You and I, the last time we were face to face was in Barcelona, where we love talking about cloud, and I always say to people, look, Cisco is not a hyperscaler, but the big public cloud players, they're like giving you a gift. They spent actually over $100 billion last year on capex, the big four. So you can build on that infrastructure. Cisco is all about hybrid cloud. So help us understand the strategy, maybe how you can leverage that build-out, and importantly what customers are telling you they want out of hybrid cloud. >>Yeah, that's a perfect question to start with, Dave. So yes, the hyperscalers have invested heavily building out their assets, and there's a great deal of innovation coming from that space. There's also a great set of innovation coming from open source, and that's another gift, in fact, to the IT community. But when I look at my customers, they're saying, well, how do I, in the context of my business, implement a strategy that takes into consideration everything that I have to manage, in terms of my contemporary workloads, in terms of my legacy, in terms of everything my developer community wants to do on DevOps, and really harness that innovation that's built in the public cloud, that's built in open source, that's built internally to me? And that naturally leads them down the path of a hybrid cloud strategy. And Cisco's mission is to provide for that imperative the simplest, most powerful platform to deliver hybrid cloud, and that platform, it's Intersight, which we've been investing in.
Inner side, it's a it's a SAS um service um inner side delivers to them that bridge between their estates of today that were closer today, the need for them to be guardians of enterprise grade resiliency with the agility uh that's needed for the future. The embracing of cloud. Native of new paradigms of deVOPS models, the embracing of innovation coming from public cloud and an open source and bridging those two is what inner side has been doing. That's kind of that's kind of the crux of our strategy. Of course we have the entire portfolio behind it to support any, any version of that, whether that is on prem in the cloud, hybrid, cloud, multi cloud and so forth. >>But but if I understand it correctly from what I heard earlier today, the inter site is really a linchpin of that strategy, is it not? >>It really is and may take a second to totally familiarize those who don't know inner side with what it is. We started building this platform quite a few years back and we we built a ground up to be an immensely scalable SAs, super simple hybrid cloud platform and it's a platform that provides a slew of service is inherently and then on top of that there are suites of services, the sweets of services that are tied to infrastructure, automation. Cisco, as well as Cisco partners. The streets of services that have nothing to do with Cisco um products from a hardware perspective. And it's got to do with more cloud orchestration and cloud native and inner side and its suite of services um continue to kind of increase in pace and velocity of delivery video. Just over the last two quarters we've announced a whole number of things will go a little bit deeper into some of those but they span everything from infrastructure automation to kubernetes and delivering community than service to workload optimization and having visibility into your cloud estate. How much it's costing into your on premise state into your work clothes and how they're performing. It's got integrations with other tooling with both Cisco Abdi uh as well as non Cisco um, assets and then and then it's got a whole slew of capabilities around orchestration because at the end of the day, the job of it is to deliver something that works and works at scale that you can monitor and make sure is resilient and that includes that. That includes a workflow and ability to say, you know, do this and do this and do this. Or it includes other ways of automation, like infrastructure as code and so forth. So it includes self service that so that expand that. But inside the world's simplest hybrid cloud platform, rapidly evolving rapidly delivering new services. And uh we'll talk about some more of those day. >>Great, thank you, Katie VJ. Let's bring you into the discussion. You guys recently made an announcement with the ASCIi corp. I was stoked because even though it seemed like a long time ago, pre covid, I mean in my predictions post, I said, ha, she was a name to watch our data partners. Et are you look at the survey data and they really have become mainstream? You know, particularly we think very important in the whole multi cloud discussion. And as well, they're attractive to customers. They have open source offerings. You can very easily experiment. Smaller organizations can take advantage. But if you want to upgrade to enterprise features like clustering or whatever, you can plug right in. Not a big complicated migration. So a very, very compelling story there. Why is this important? Why is this partnership important to Cisco's customers? Mhm. 
>>Absolutely. When the spot on every single thing that you said, let me just start by paraphrasing what ambition statement is in the cloud and computer group. Right ambition statement is to enable a cloud operating model for hybrid cloud. And what we mean by that is the ability to have extreme amounts of automation orchestration and observe ability across your hybrid cloud idea operations now. Uh So developers and applications team get a great amount of agility in public clouds and we're on a mission to bring that kind of agility and automation to the private cloud and to the data centers and inter site is a quickie platform and lynchpin to enable that kind of operations. Uh, Cloud like operations in the in the private clouds and the key uh As you rightly said, harsher car is the, you know, they were the inventors of the concept of infrastructure at school and in terra form, they have the world's number one infrastructure as code platform. So it became a natural partnership for Cisco to enter into a technology partnership with harsher card to integrate inter site with hardship cops, terra form to bring the benefits of infrastructure as code to the to hybrid cloud operations. And we've entered into a very tight integration and uh partnership where we allow developers devops teams and infrastructure or administrators to allow the use of infrastructure as code in a SAS delivered manner for both public and private club. So it's a very unique partnership and a unique integration that allows the benefits of cloud managed i E C. To be delivered to hybrid cloud operations. And we've been very happy and proud to be partnering with Russian government shutdown. >>Yeah, Terra form gets very high marks from customers. The a lot of value there. The inner side integration adds to that value. Let's stay on cloud native for a minute. We all talk about cloud native cady was sort of mentioning before you got the the core apps, uh you want to protect those, make sure their enterprise create but they gotta be cool as well for developers. You're connecting to other apps in the cloud or wherever. How are you guys thinking about this? Cloud native trend? What other movies are you making in this regard? >>I mean cloud native is there is one of the paramount I. D. Trends of today and we're seeing massive amounts of adoption of cloud native architecture in all modern applications. Now, Cloud Native has become synonymous with kubernetes these days and communities has emerged as a de facto cloud native platform for modern cloud native app development. Now, what Cisco has done is we have created a brand new SAs delivered kubernetes service that is integrated with inter site, we call it the inter site community service for A. Ks. And this just geared a little over one month ago. Now, what interstate kubernetes service does is it delivers a cloud managed and cloud delivered kubernetes service that can be deployed on any supported target infrastructure. It could be a Cisco infrastructure, it could be a third party infrastructure or it could even be public club. But think of it as kubernetes anywhere delivered as says, managed from inside. It's a very powerful capability that we've just released into inter site to enable the power of communities and clog native to be used to be used anywhere. 
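The "Kubernetes anywhere, managed the same way" idea is easiest to see programmatically: once clusters expose a conformant API, the same client code works against any of them, wherever they run. This sketch uses the open-source kubernetes Python client against whatever contexts are in your local kubeconfig; it is a generic illustration, not the Intersight Kubernetes Service API itself.

```python
# Sketch: the same code inventories workloads on any conformant cluster,
# on-prem or in a public cloud, using contexts from the local kubeconfig.
from kubernetes import client, config  # pip install kubernetes

def summarize_cluster(context_name: str) -> None:
    config.load_kube_config(context=context_name)
    core = client.CoreV1Api()
    apps = client.AppsV1Api()
    nodes = core.list_node().items
    deployments = apps.list_deployment_for_all_namespaces().items
    print(f"[{context_name}] nodes={len(nodes)} deployments={len(deployments)}")

if __name__ == "__main__":
    contexts, _active = config.list_kube_config_contexts()
    for ctx in contexts:
        summarize_cluster(ctx["name"])
```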
But today we made a very important aspect because we are today announced the brand new Cisco service mess manager, the Cisco service mesh manager, which is available as an extension to the KS are doing decide basically we see service measures as being the future of networking right in the past we had layer to networking and layer three networking and now with service measures, application networking and layer seven networking is the next frontier of, of networking. But you need to think about networking for the application age very differently how it is managed, how it is deployed. It needs to be ready, developer friendly and developer centric. And so what we've done is we've built out an application networking strategy and built out the service match manager as a very simple way to deliver application networking through the consumers, like like developers and application teams. This is built on an acquisition that Cisco made recently of Banzai Cloud and we've taken the assets of Banzai Cloud and deliver the Cisco service mesh manager as an extension to KS. That brings the promise of future networking and modern networking to application and development gives >>God thank you. BJ. And so Katie, let's let's let's wrap this up. I mean, there was a lot in this announcement today, a lot of themes around openness, heterogeneity and a lot of functionality and value. Give us your final thoughts. >>Absolutely. So, couple of things to close on, first of all, um Inner side is the simplest, most powerful hybrid cloud platform out there. It enables that that cloud operating model that VJ talked about, but enables that across cloud. So it's sad, it's relatively easy to get into it and give it a spin so that I'd highly encouraged anybody who's not familiar with it to try it out and anybody who is familiar with it to look at it again, because they're probably services in there that you didn't notice or didn't know last time you looked at it because we're moving so fast. So that's the first thing. The second thing I close with is um, we've been talking about this bridge that's kind of bridging, bridging uh your your on prem your open source, your cloud estates. And it's so important to to make that mental leap because uh in past generation, we used to talk about integrating technologies together and then with public cloud, we started talking about move to public cloud, but it's really how do we integrate, how do we integrate all of that innovation that's coming from the hyper scale, is everything they're doing to innovate superfast, All of that innovation is coming from open source, all of that innovation that's coming from from companies around the world, including Cisco, How do we integrate that to deliver an outcome? Because at the end of the day, if you're a cloud of Steam, if you're an idea of Steam, your job is to deliver an outcome and our mission is to make it super simple for you to do that. That's the mission we're on and we're hoping that everybody that's excited as we are about how simple we made that. >>Great, thank you a lot in this announcement today, appreciate you guys coming back on and help us unpack you know, some of the details. Thank thanks so much. Great having you. >>Thank you >>Dave in a moment. We're gonna come back and talk about disruptive technologies and futures in the age of hybrid cloud with Vegas Rattana and James leach. You're watching the cube, the global leader in high tech coverage. >>What if your server box >>wasn't a box at >>all? 
What if it could do anything run anything? >>Be any box you >>need with massive scale precision and intelligence managed and optimized from the cloud integrated with all your clouds, private, public or hybrid. So you can build whatever you need today and tomorrow. The potential of this box is unlimited. Unstoppable unseen ever before. Unbox the future with Cisco UCS X series powered by inter site >>Cisco. >>The bridge to possible. Yeah >>we're here with Vegas Rattana who's the director of product management for Pcs at Cisco. And James Leach is the director of business development for U. C. S. At the Cisco as well. We're gonna talk about computing in the age of hybrid cloud. Welcome gentlemen. Great to see you. >>Thank you. >>Thank you because let's start with you and talk about a little bit about computing architectures. We know that they're evolving. They're supporting new data intensive and other workloads especially as high performance workload requirements. What's this guy's point of view on all this? I mean specifically interested in your thoughts on fabrics. I mean it's kind of your wheelhouse, you've got accelerators. What are the workloads that are driving these evolving technologies and how how is it impacting customers? What are you seeing? >>Sure. First of all, very excited to be here today. You're absolutely right. The pace of innovation and foundational platform ingredients have just been phenomenal in recent years. The fabric that's writers that drives the processing power, the Golden city all have been evolving just an amazing place and the peace will only pick up further. But ultimately it is all about applications and the way applications leverage those innovations. And we do see applications evolving quite rapidly. The new classes of applications are evolving to absorb those innovations and deliver much better business values. Very, very exciting time step. We're talking about the impact on the customers. Well, these innovations have helped them very positively. We do see significant challenges in the data center with the point product based approach of delivering these platforms, innovations to the applications. What has happened is uh, these innovations today are being packaged as point point products to meet the needs of a specific application and as you know, the different applications have no different needs. Some applications need more to abuse, others need more memory, yet others need, you know, more course, something different kinds of fabrics. As a result, if you walk into a data center today, it is very common to see many different point products in the data center. This creates a manageability challenge. Imagine the aspect of managing, you know, several different form factors want you to you purpose built servers. The variety of, you know, a blade form factor, you know, this reminds me of the situation we had before smartphones arrived. You remember the days when you when we used to have a GPS device for navigation system, a cool music device for listening to the music. A phone device for making a call camera for taking the photos right? And we were all excited about it. It's when a smart phones the right that we realized all those cool innovations could be delivered in a much simpler, much convenient and easy to consume through one device. And you know, I could uh, that could completely transform our experience. 
So we see the customers were benefiting from these innovations to have a way to consume those things in a much more simplistic way than they are able to go to that. >>And I like to look, it's always been about the applications. But to your point, the applications are now moving in a much faster pace. The the customer experience is expectation is way escalated. And when you combine all these, I love your analogy there because because when you combine all these capabilities, it allows us to develop new Applications, new capabilities, new customer experiences. So that's that I always say the next 10 years, they ain't gonna be like the last James Public Cloud obviously is heavily influencing compute design and and and customer operating models. You know, it's funny when the public cloud first hit the market, everyone we were swooning about low cost standard off the shelf servers in storage devices, but it quickly became obvious that customers needed more. So I wonder if you could comment on this. How are the trends that we've seen from the hyper scale, Is how are they filtering into on prem infrastructure and maybe, you know, maybe there's some differences there as well that you could address. >>Absolutely. So I'd say, first of all, quite frankly, you know, public cloud has completely changed the expectations of how our customers want to consume, compute, right? So customers, especially in a public cloud environment, they've gotten used to or, you know, come to accept that they should consume from the application out, right? They want a very application focused view, a services focused view of the world. They don't want to think about infrastructure, right? They want to think about their application, they wanna move outward, Right? So this means that the infrastructure basically has to meet the application where it lives. So what that means for us is that, you know, we're taking a different approach. We're we've decided that we're not going to chase this single pane of glass view of the world, which, frankly, our customers don't want, they don't want a single pane of glass. What they want is a single operating model. They want an operating model that's similar to what they can get at the public with the public cloud, but they wanted across all of their cloud options they wanted across private cloud across hybrid cloud options as well. So what that means is they don't want to just consume infrastructure services. They want all of their cloud services from this operating model. So that means that they may want to consume infrastructure services for automation Orchestration, but they also need kubernetes services. They also need virtualization services, They may need terror form workload optimization. All of these services have to be available, um, from within the operating model, a consistent operating model. Right? So it doesn't matter whether you're talking about private cloud, hybrid cloud anywhere where the application lives. It doesn't matter what matters is that we have a consistent model that we think about it from the application out. And frankly, I'd say this has been the stumbling block for private cloud. Private cloud is hard, right. This is why it hasn't been really solved yet. This is why we had to take a brand new approach. And frankly, it's why we're super excited about X series and inter site as that operating model that fits the hybrid cloud better than anything else we've seen >>is acute. 
First, first time technology vendor has ever said it's not about a single pane of glass because I've been hearing for decades, we're gonna deliver a single pane of glass is going to be seamless and it never happens. It's like a single version of the truth. It's aspirational and, and it's just not reality. So can we stay in the X series for a minute James? Uh, maybe in this context, but in the launch that we saw today was like a fire hose of announcements. So how does the X series fit into the strategy with inter site and hybrid cloud and this operating model that you're talking about? >>Right. So I think it goes hand in hand, right. Um the two pieces go together very well. So we have uh, you know, this idea of a single operating model that is definitely something that our customers demand, right? It's what we have to have, but at the same time we need to solve the problems of the cost was talking about before we need a single infrastructure to go along with that single operating model. So no longer do we need to have silos within the infrastructure that give us different operating models are different sets of benefits when you want infrastructure that can kind of do all of those configurations, all those applications. And then, you know, the operating model is very important because that's where we abstract the complexity that could come with just throwing all that technology at the infrastructure so that, you know, this is, you know, the way that we think about is the data center is not centered right? It's no longer centered applications live everywhere. Infrastructure lives everywhere. And you know, we need to have that consistent operating model but we need to do things within the infrastructure as well to take full advantage. Right? So we want all the sas benefits um, of a Ci CD model of, you know, the inter site can bring, we want all that that proactive recommendation engine with the power of A I behind it. We want the connected support experience went all of that. They want to do it across the single infrastructure and we think that that's how they tie together, that's why one or the other doesn't really solve the problem. But both together, that's why we're here. That's why we're super excited. >>So Vegas, I make you laugh a little bit when I was an analyst at I D C, I was deep in infrastructure and then when I left I was doing, I was working with application development heads and like you said, uh infrastructure, it was just a, you know, roadblock but but so the target speakers with Cisco announced UCS a decade ago, I totally missed it. I didn't understand it. I thought it was Cisco getting into the traditional server business and it wasn't until I dug in then I realized that your vision was really to transform infrastructure, deployment and management and change them all. I was like, okay, I got that wrong uh but but so let's talk about the the ecosystem and the joint development efforts that are going on there, X series, how does it fit into this, this converged infrastructure business that you've, you've built and grown with partners, you got storage partners like Netapp and Pure, you've got i SV partners in the ecosystem. We see cohesive, he has been a while since we we hung out with all these companies at the Cisco live hopefully next year, but tell us what's happening in that regard. >>Absolutely, I'm looking forward to seeing you in the Cisco live next year. You know, they have absolutely you brought up a very good point. 
You see this is about the ecosystem that it brings together, it's about making our customers bring up the entire infrastructure from the core foundational hardware all the way to the application level so that they can, you know, go off and running pretty quick. The converse infrastructure has been one of the corners 2.5 hour of the strategy, as you pointed out in the last decade. And and and I'm I'm very glad to share that converse infrastructure continues to be a very popular architecture for several enterprise applications. Seven today, in fact, it is the preferred architecture for mission critical applications where performance resiliency latency are the critical requirements there almost a de facto standards for large scale deployments of virtualized and business critical data bases and so forth with X series with our partnerships with our Stories partners. Those architectures will absolutely continue and will get better. But in addition as a hybrid cloud world, so we are now bringing in the benefits of canvas in infrastructure uh to the world of hybrid cloud will be supporting the hybrid cloud applications now with the CIA infrastructure that we have built together with our strong partnership with the Stories partners to deliver the same benefits to the new ways applications as well. >>Yeah, that's what customers want. They want that cloud operating model. Right, go ahead please. >>I was going to say, you know, the CIA model will continue to thrive. It will transition uh it will expand the use cases now for the new use cases that were beginning to, you know, say they've absolutely >>great thank you for that. And James uh have said earlier today, we heard this huge announcement, um a lot of lot of parts to it and we heard Katie talk about this initiative is it's really computing built for the next decade. I mean I like that because it shows some vision and you've got a road map that you've thought through the coming changes in workloads and infrastructure management and and some of the technology that you can take advantage of beyond just uh, you know, one or two product cycles. So, but I want to understand what you've done here specifically that you feel differentiates you from other competitive architectures in the industry. >>Sure. You know that's a great question. Number one. Number two, um I'm frankly a little bit concerned at times for for customers in general for our customers customers in general because if you look at what's in the market, right, these rinse and repeat systems that were effectively just rehashes of the same old design, right? That we've seen since before 2000 and nine when we brought you C. S to market these are what we're seeing over and over and over again. That's that's not really going to work anymore frankly. And I think that people are getting lulled into a false sense of security by seeing those things continually put in the market. We rethought this from the ground up because frankly future proofing starts now, right? If you're not doing it right today, future proofing isn't even on your radar because you're not even you're not even today proved. So we re thought the entire chassis, the entire architecture from the ground up. Okay. If you look at other vendors, if you look at other solutions in the market, what you'll see is things like management inside the chassis. That's a great example, daisy chaining them together >>like who >>needs that? Who wants that? Like that kind of complexity is first of all, it's ridiculous. 
Um, second of all, um, if you want to manage across clouds, you have to do it from the cloud, right. It's just common sense. You have to move management where it can have the scale and the scope that it needs to impact your entire domain, your world, which is much larger now than it was before. We're talking about true hybrid cloud here. Right. So we had to solve certain problems that existed in the traditional architecture. You know, I can't tell you how many times I heard you talk about the mid plane is a great example. You know, the mid plane and a chastity is a limiting factor. It limits us on how much we can connect or how much bandwidth we have available to the chassis. It limits us on air flow and other things. So how do you solve that problem? Simple. Just get rid of it. Like we just we took it out, right. It's not no longer a problem. We designed an architecture that doesn't need it. It doesn't rely on it. No forklift upgrades. So, as we start moving down the path of needing liquid cooling or maybe we need to take advantage of some new, high performance, low latency fabrics. We can do that with almost. No problem at all. Right, So, we don't have any forklift upgrades. Park your forklift on the side. You won't need it anymore because you can upgrade gradually. You can move along as technologies come into existence that maybe don't even exist. They they may not even be on our radar today to take advantage of. But I like to think of these technologies, they're really important to our customers. These are, you know, we can call them disruptive technologies. The reality is that we don't want to disrupt our customers with these technologies. We want to give them these technologies so they can go out and be disruptive themselves. Right? And this is the way that we've designed this from the ground up to be easy to consume and to take advantage of what we know about today and what's coming in the future that we may not even know about. So we think this is a way to give our customers that ultimate capability flexibility and and future proofing. >>I like I like that phrase True hybrid cloud. It's one that we've used for years and but to me this is all about that horizontal infrastructure that can support that vision of what true hybrid cloud is. You can support the mission critical applications. You can you can develop on the system and you can support a variety of workload. You're not locked into one narrow stovepipe and that does have legs, Vegas and James. Thanks so much for coming on the program. Great to see you. >>Yeah. Thank you. Thank you. >>When we return shortly thomas Shiva who leads Cisco's data center group will be here and thomas has some thoughts about the transformation of networking I. T. Teams. You don't wanna miss what he has to say. You're watching the cube. The global leader in high tech company. Okay, >>mm. Mhm, mm. Okay. Mhm. Yeah. Mhm. Yeah. >>Mhm. Yes. Yeah. Okay. We're here with thomas Shiva who is the Vice president of Product Management, A K A VP of all things data center, networking STN cloud. You name it in that category. Welcome thomas. Good to see you again. >>Hey Sam. Yes. Thanks for having me on. >>Yeah, it's our pleasure. Okay, let's get right into observe ability. When you think about observe ability, visibility, infrastructure monitoring problem resolution across the network. How does cloud change things? In other words, what are the challenges that networking teams are currently facing as they're moving to the cloud and trying to implement hybrid cloud? 
>>Yeah, visibility as always is very, very important. And quite frankly, it's not just the networking team, it's actually the application team too, right? And as you pointed out, the underlying impetus to what's going on here is the data center is where the data is, and I think we said this a couple of years back. Really what happens is the applications are going to be deployed in different locations, right? Whether it's in a public cloud, whether it's on prem, and they are built differently, right? They're built as microservices, and the same application might actually be distributed as well. And so what that really means is you need, as an operator as well as actually a user, better visibility: where are my pieces? And you need to be able to correlate between where the app is and what the underlying network is that's in place in these different locations, so you actually have good knowledge of how the app is running, fantastic or sometimes not. So I think that's really the problem statement, what we're trying to go after with observability. >>Okay, and let's double-click on that. So a lot of customers tell me that you've got to stare at log files until your eyes bleed, and you've got to bring in guys with lab coats who have PhDs to figure all this stuff out. So you just described, it's getting more complex, but at the same time you have to simplify things. So how are you doing that? >>Correct. So what we basically have done is we have this fantastic product that is called ThousandEyes. And what this does is basically, as the name suggests, which I think is a fantastic name, you have these sensors everywhere, and you can have good correlation on links, whether I run from a site to a site, from a site to a cloud, from a cloud to a cloud, and you basically can measure what is the performance of these links. And so what we're doing here is we're actually extending the footprint of these ThousandEyes agents. Right? Instead of just having them in virtual machines and clouds, we are now embedding them in the Cisco network devices. Right? We announced this with the Catalyst 9000, and we're extending this now to our Catalyst 8000 product line for the SD-WAN products, as well as to the data center products, the Nexus line. And so what you see is, as we like to say, you have a thousand eyes, you get a million insights and you get a billion dollars of improvements in how your applications run. And this is really the power of tying together the footprint of where the network is with the visibility of what is going on, so you actually know the application behavior that is attached to this network. >>I see. Okay. So as the cloud evolves and expands and connects, you're actually enabling ThousandEyes to go further, not just confined within a single data center location, but out to the network, across clouds, et cetera. >>Correct. Wherever the network is, you're going to have a ThousandEyes sensor, and you can bring this together. And you can quite frankly pick, if you want to say, hey, I have my application in public cloud provider A, domain one, and I have another one in domain two, I can go monitor that link. I can also have a user at a campus location or branch location, I can put an agent there, and then I can monitor the connectivity from that branch location all the way to, let's say, the corporation's data center, our headquarters, or to the cloud.
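To make the agent idea concrete: at its simplest, a probe is just a process that times requests to a set of targets from wherever it sits and reports the results somewhere central. The sketch below is a toy stand-in under stated assumptions, not ThousandEyes; the target URLs are placeholders.

```python
# Toy probe: time HTTP requests from wherever this runs (branch, VM, container)
# to a few targets, so per-link latency can be compared across vantage points.
import statistics
import time
import requests  # pip install requests

TARGETS = [                      # placeholders for app front doors / APIs
    "https://example.com/",
    "https://httpbin.org/get",
]

def probe(url: str, samples: int = 5, timeout: float = 5.0) -> dict:
    latencies_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            resp = requests.get(url, timeout=timeout)
            resp.raise_for_status()
            latencies_ms.append((time.perf_counter() - start) * 1000.0)
        except requests.RequestException:
            latencies_ms.append(None)   # record loss/failure for this sample
    ok = [l for l in latencies_ms if l is not None]
    return {
        "url": url,
        "loss": (samples - len(ok)) / samples,
        "median_ms": statistics.median(ok) if ok else None,
    }

if __name__ == "__main__":
    for target in TARGETS:
        print(probe(target))
```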
And I can have these probes and just we have visibility and saying, hey, if there's a performance, I know where the issue is and then I obviously can use all the other foods that we have to address those. >>All right, let's talk about the cloud operating model. Everybody tells us it's really the change in the model that drives big numbers in terms of R. O. I. And I want you to maybe address how you're bringing automation and devops to this world of of hybrid and specifically how is Cisco enabling I. T. Organizations to move to a cloud operating model? Is that cloud definition expands? >>Yeah, no that's that's another interesting topic beyond the observe ability. So really, really what we're seeing and this is going on for uh I want to say a couple of years now, it's really this transition from operating infrastructure as a networking team more like a service like what you would expect from a cloud provider. Right? It's really around the network team offering services like a cloud provided us. And that's really what the meaning is of cloud operating model. Right? But this is infrastructure running your own data center where that's linking that infrastructure was whatever runs on the public club is operating and like a cloud service. And so we are on this journey for why? So one of the examples uh then we have removing some of the control software assets, the customers that they can deploy on prayer uh to uh an instance that they can deploy in a cloud provider and just busy, insane. She ate things there and then just run it that way. Right. And so the latest example for this is what we have our identity service engine that is now limited availability available on AWS and will become available in mid this year, both in Italy as unusual as a service. You can just go to market place, you can load it there and now you create, you can start running your policy control in a cloud, managing your access infrastructure in your data center, in your campus wherever you want to do it. And so that's just one example of how we see our customers network operations team taking advantage of a cloud operating model and basically employing their, their tools where they need them and when they need them. >>So what's the scope of, I hope I'm saying it right. Ice, right. I see. I think it's called ice. What's the scope of that like for instance, turn in effect my or even, you know, address simplify my security approach. >>Absolutely. That's now coming to what is the beauty of the product itself? Yes. What you can do is really is that there's a lot of people talking about else. How do I get to zero trust approach to networking? How do I get to a much more dynamic, flexible segmentation in my infrastructure. Again, whether this is only campus X as well as a data center and Ice help today, you can use this as a point to define your policies and then any connect from there. Right. In this particular case we would instant Ice in the cloud as a software load. You now can connect and say, hey, I want to manage and program my network infrastructure and my data center on my campus, going to the respective control over this DNA Center for campus or whether it is the A. C. I. Policy controller. And so yes, what you get as an effect out of this is a very elegant way to automatically manage in one place. What is my policy and then drive the right segmentation in your network infrastructure? >>zero. Trust that, you know, it was pre pandemic. It was kind of a buzzword. Now it's become a mandate. 
I wonder if we could talk about right. I mean I wonder if you talk about cloud native apps, you got all these developers that are working inside organizations. They're maintaining legacy apps. They're connecting their data to systems in the cloud there, sharing that data. I need these developers, they're rapidly advancing their skill sets. How is Cisco enabling its infrastructure to support this world of cloud? Native making infrastructure more responsive and agile for application developers? >>Yeah. So, you know, we're going to the top of his visibility, we talked about the operating model, how how our network operators actually want to use tools going forward. Now, the next step to this is it's not just the operator. How do they actually, where do they want to put these tools, how they, how they interact with these tools as well as quite frankly as how, let's say, a devops team on application team or Oclock team also wants to take advantage of the program ability of the underlying network. And this is where we're moving into this whole cloud native discussion, right? Which is really two angles, that is the cloud native way, how applications are being built. And then there is the cloud native way, how you interact with infrastructure. Right? And so what we have done is we're a putting in place the on ramps between clouds and then on top of it we're exposing for all these tools, a P I S that can be used in leverage by standard uh cloud tools or uh cloud native tools. Right. And one example or two examples we always have and again, we're on this journey for a while is both answerable uh script capabilities that exist from red hat as well as uh Ashitaka from capabilities that you can orchestrate across infrastructure to drive infrastructure, automation and what what really stands behind it is what either the networking operations team wants to do or even the ap team. They want to be able to describe the application as a code and then drive automatically or programmatically in situation of infrastructure needed for that application. And so what you see us doing is providing all these capability as an interface for all our network tools. Right. Whether it's this ice that I just mentioned, whether this is our D. C. And controllers in the data center, uh whether these are the controllers in the in the campus for all of those, we have cloud native interfaces. So operator or uh devops team can actually interact directly with that infrastructure the way they would do today with everything that lives in the cloud, with everything how they brought the application. >>This is key. You can't even have the conversation of op cloud operating model that includes and comprises on prem without programmable infrastructure. So that's that's very important. Last question, thomas our customers actually using this, they made the announcement today. There are there are there any examples of customers out there doing this? >>We do have a lot of customers out there that are moving down the past and using the D. D. Cisco high performance infrastructure, but also on the compute side as well as on an exercise one of the customers. Uh and this is like an interesting case. It's Rakuten uh record and is a large tackle provider, a mobile five G. Operator uh in Japan and expanding and is in different countries. Uh and so people something oh, cloud, you must be talking about the public cloud provider, the big the big three or four. 
Last question, Thomas: are customers actually using this? You've made the announcements today. Are there any examples of customers out there doing this? >>We do have a lot of customers out there that are moving down this path and using the Cisco high-performance infrastructure, both on the compute side as well as on the Nexus side. One of the customers, and this is an interesting case, is Rakuten. Rakuten is a large telco provider, a mobile 5G operator, in Japan and expanding into different countries. And so people sometimes think, oh, cloud, you must be talking about the public cloud providers, the big three or four. But if you look at it, a lot of the telco service providers are actually cloud providers as well, and expanding very rapidly. And so we're actually very proud to work together with Rakuten and help them build a high-performance data center infrastructure, based on 100 gig and actually 400 gig, to drive their deployment of a 5G mobile cloud infrastructure, which is where the whole world of traffic is going. And so it's really exciting to see this development and see the power of automation and visibility, together with the high-performance infrastructure, becoming reality and actually delivering services. >>You have some great points you're making there. Yes, you have the big four clouds, they're enormous, but then you have a lot of actually quite large clouds: telcos that are either proximate to those clouds or in places where those hyperscalers may not have a presence and are building out their own infrastructure. So that's a great case study. Thomas, hey, great having you on. Thanks so much for spending some time with us. >>Yeah, same here. I appreciate it. Thanks a lot. >>I'd like to thank Cisco and our guests today, Vijoy, KD, VJ, Vikas, James and Thomas, for all your insights into this evolving world of hybrid cloud. As we said at the top, the next decade will be defined by an entirely new set of rules. And it's quite possible things will evolve more quickly, because the cloud is maturing and has paved the way for a new operating model where everything is delivered as a service. Automation has become a mandate because we just can't keep throwing IT labor at the problem anymore. And with AI, so much more is possible in terms of driving operational efficiencies, simplicity and support of the workloads that are driving the digital transformation that we talk about all the time. This is Dave Vellante, and I hope you've enjoyed today's program. Stay safe, be well, and we'll see you next time.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Volonte | PERSON | 0.99+ |
Dave Volonte | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
James | PERSON | 0.99+ |
Japan | LOCATION | 0.99+ |
Katie | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Italy | LOCATION | 0.99+ |
san Francisco | LOCATION | 0.99+ |
Sam | PERSON | 0.99+ |
Barcelona | LOCATION | 0.99+ |
thomas | PERSON | 0.99+ |
two pieces | QUANTITY | 0.99+ |
1000 eyes | QUANTITY | 0.99+ |
Germany | LOCATION | 0.99+ |
Dejoy Pandey | PERSON | 0.99+ |
thomas Shiva | PERSON | 0.99+ |
2020 | DATE | 0.99+ |
VJ Venugopal | PERSON | 0.99+ |
two vectors | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
James Leach | PERSON | 0.99+ |
First | QUANTITY | 0.99+ |
single | QUANTITY | 0.99+ |
Rakuten | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
CIA | ORGANIZATION | 0.99+ |
mid this year | DATE | 0.99+ |
next year | DATE | 0.99+ |
ASCIi | ORGANIZATION | 0.99+ |
tomorrow | DATE | 0.99+ |
Steam | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
2.5 hour | QUANTITY | 0.99+ |
second thing | QUANTITY | 0.99+ |
two angles | QUANTITY | 0.99+ |
FBI | ORGANIZATION | 0.99+ |
first thing | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
1000 | QUANTITY | 0.99+ |
Netapp | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Vegas Rattana | ORGANIZATION | 0.99+ |
two tensions | QUANTITY | 0.98+ |
two | QUANTITY | 0.98+ |
Thomas Scheibe, Cisco | Cisco Future Cloud
>>From around the globe, it's theCUBE, presenting Future Cloud: one event, a world of opportunities, brought to you by Cisco. >>Okay, we're here with Thomas Scheibe, who's the vice president of product management, aka VP of all things data center networking, SDN, cloud, you name it in that category. Welcome, Thomas. Good to see you again. >>Hey, same here. Thanks for having me on. >>Yeah, it's our pleasure. Okay, let's get right into observability. When you think about observability, visibility, infrastructure monitoring, problem resolution across the network, how does cloud change things? In other words, what are the challenges that networking teams are currently facing as they're moving to the cloud and trying to implement hybrid cloud? >>Yeah, visibility, as always, is very, very important. And quite frankly, it's not just the network team; it's actually the application team too. And as you pointed out, the underlying impetus for what's going on here is that the data center is wherever the data is, as we said a couple of years back. And really what happens is the applications are going to be deployed in different locations, whether it's in a public cloud, whether it's on prem, and they're built differently. They are built as microservices; the same application might actually be distributed as well. And so what that really means is you need, as an operator as well as actually a user, visibility into where your pieces are, and you need to be able to correlate between where the app is and what the underlying network is that is in place at these different locations, so you actually have a good knowledge of why the app is running so fantastic, or sometimes not. So I think that's really the problem statement of what we tried to go after with observability. >>Okay, let's double-click on that. A lot of customers tell me that you've got to stare at log files until your eyes bleed, and then you've got to bring in guys with lab coats who have PhDs to figure all this stuff out. So you just described it: it's getting more complex, but at the same time you have to simplify things. So how are you doing that? >>Correct. So what we basically have done is we have this fantastic product that is called ThousandEyes. And what ThousandEyes is, basically, as the name suggests, and I think it's a fantastic name, is you have these sensors everywhere, and you can have a good correlation on links: if I run from a site to a site, from a site to a cloud, from a cloud to a cloud, you basically can measure what the performance of these links is. And so what we're doing here is we're actually extending the footprint of these ThousandEyes agents. Instead of just having them in virtual machines and clouds, we are now embedding them in the Cisco network devices. We announced this with the Catalyst 9000, and we're extending this now to our Catalyst 8000 product line for the WAN, as well as to the data center products, the Nexus line. And so what you see is: you have a thousand eyes, you get a million insights, and you get a billion dollars of improvement in how your applications run. And this is really the power of tying together the footprint of where the network is with the visibility of what is going on, so you actually know the application behavior that is attached to this network.
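As a rough illustration of correlating application behavior with the network underneath it, here is a small sketch that lines up application response times with path latency samples from embedded agents and reports how strongly they move together. The sample data is stubbed in; in a real deployment both series would come from an APM tool and the agent API.

```python
from statistics import correlation, mean  # statistics.correlation needs Python 3.10+

# Stand-in samples; in practice these would be per-minute averages pulled from
# the APM tool and from the network agents embedded in switches and routers.
app_response_ms = [210, 225, 205, 480, 470, 215, 220, 455]
path_latency_ms = [12, 13, 12, 95, 90, 14, 13, 88]


def attribute_slowness(app_ms, net_ms):
    """Rough attribution: if app slowness tracks path latency, suspect the network."""
    r = correlation(app_ms, net_ms)
    if r > 0.8:
        return f"network-correlated (r={r:.2f}); chase the degraded path first"
    return f"weakly correlated (r={r:.2f}); look at the app tier (avg {mean(app_ms):.0f} ms)"


if __name__ == "__main__":
    print(attribute_slowness(app_response_ms, path_latency_ms))
```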
>>I see. Okay, so as the cloud evolves and expands and connects, you're actually enabling ThousandEyes to go further, not just confined within a single data center location, but out across the network, across clouds, et cetera. >>Correct. Wherever the network is, you're going to have a ThousandEyes sensor, and you can bring this together. And you can quite frankly pick: if you want to say, hey, I have my application in public cloud provider A, domain one, and I have another one in domain two, I can monitor that link. I can also say, I have a user at a campus or branch location; I can put an agent there and then monitor the connectivity from that branch location all the way to, let's say, the corporation's data center or headquarters, or to the cloud. And I can have these probes and just have visibility, saying, hey, if there's a performance issue, I know where it is, and then I obviously can use all the other tools that we have to address it. >>All right, let's talk about the cloud operating model. Everybody tells us it's really the change in the model that drives big numbers in terms of ROI. And I want you to maybe address how you're bringing automation and DevOps to this world of hybrid, and specifically, how is Cisco enabling IT organizations to move to a cloud operating model as that cloud definition expands? >>Yeah, that's another interesting topic beyond the observability. Really what we're seeing, and this has been going on for, I want to say, a couple of years now, is this transition from operating infrastructure as a networking team to operating it more like a service, like what you would expect from a cloud provider. This is really around the network team offering services like a cloud provider does, and that's really what the meaning of the cloud operating model is, but with infrastructure running in your own data center, or linking that infrastructure with whatever runs in the public cloud, operated like a cloud service. And so we have been on this journey for a while. One of the examples we have is moving some of the control software assets that customers today can deploy on prem to an instance that they can deploy with a cloud provider, and just basically instantiate there and run it that way. And the latest example of this is our Identity Services Engine, which is now in limited availability on AWS and will become generally available mid this year. You can just go to the marketplace, you can load it there, and now you can start running your policy control in the cloud, managing your access infrastructure in your data center, in your campus, wherever you want to do it. And so that's just one example of how we see our customers' network operations teams taking advantage of a cloud operating model and basically deploying their tools where they need them and when they need them.
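To make the earlier point about monitoring the branch-to-data-center, data-center-to-cloud, and cloud-to-cloud links concrete, here is a minimal sketch of a polling loop that flags which segment is degraded. The endpoint, token, and response fields are illustrative assumptions, not the actual ThousandEyes API.

```python
import os
import requests

# Hypothetical probe API: the URL, auth scheme, and JSON fields are illustrative
# assumptions used to show the workflow, not a documented interface.
API_BASE = "https://probes.example.com/api/v1"
TOKEN = os.environ.get("PROBE_API_TOKEN", "")

# The links we care about: branch -> data center, data center -> cloud, cloud -> cloud.
LINKS = ["branch-to-dc", "dc-to-aws", "aws-to-azure"]
LATENCY_BUDGET_MS = 80  # per-link budget; tune to the application's SLA


def fetch_link_metrics(link_id: str) -> dict:
    """Return the latest latency/loss sample for one monitored link."""
    resp = requests.get(
        f"{API_BASE}/links/{link_id}/metrics",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # assumed shape: {"latency_ms": 42.1, "loss_pct": 0.0}


def find_degraded_links() -> list:
    """Flag the segments where the issue actually lives, instead of guessing."""
    degraded = []
    for link in LINKS:
        m = fetch_link_metrics(link)
        if m["latency_ms"] > LATENCY_BUDGET_MS or m["loss_pct"] > 1.0:
            degraded.append(link)
    return degraded


if __name__ == "__main__":
    bad = find_degraded_links()
    print("degraded segments:", bad or "none; look at the app tier instead")
```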
>>So what's the scope of, I hope I'm saying it right, ISE, right? I think they call it "ice." What's the scope of that? Can it, for instance, in effect manage my, or even, you know, simplify my security approach? >>Absolutely. That now comes to the beauty of the product itself. There are a lot of people asking, how do I get to a zero-trust approach to networking? How do I get to a much more dynamic, flexible segmentation in my infrastructure, whether this is on the campus as well as in the data center? And ISE helps you there. You can use it as the one point to define your policies and then drive them from there. In this particular case, we would instantiate ISE in the cloud as a software load. You now can connect and say, hey, I want to manage and program my network infrastructure in my data center, on my campus, going through the respective controller, whether that's DNA Center for the campus or whether it's the ACI policy controller. And so yes, what you get as an effect out of this is a very elegant way to automatically manage, in one place, what my policy is, and then drive the right segmentation in your network infrastructure. >>Zero trust: pre-pandemic it was kind of a buzzword; now it's become a mandate. I wonder if we could talk about cloud-native apps. You've got all these developers that are working inside organizations. They're maintaining legacy apps, they're connecting their data to systems in the cloud, they're sharing that data, and these developers are rapidly advancing their skill sets. How is Cisco enabling its infrastructure to support this world of cloud native, making infrastructure more responsive and agile for application developers? >>Yeah. So, you know, we talked at the top about visibility, and we talked about the operating model, how our network operators actually want to use tools going forward. Now the next step to this is, it's not just the operator: where do they want to put these tools, how do they interact with these tools, as well as, quite frankly, how, let's say, a DevOps team, an application team or a cloud team also wants to take advantage of the programmability of the underlying network. And this is where we're moving into this whole cloud-native discussion, which really has two angles: there is the cloud-native way applications are being built, and then there is the cloud-native way you interact with infrastructure. And so what we have done is we're putting in place the on-ramps between clouds, and then on top of it, we're exposing for all these tools APIs that can be used and leveraged by standard cloud tools or cloud-native tools. And one example, or two examples, we always have, and again, we're on this journey for a while, is both the Ansible script capabilities that exist from Red Hat as well as the HashiCorp Terraform capabilities that you can orchestrate across infrastructure to drive infrastructure automation. And what really stands behind it is what either the networking operations team wants to do or even the app team: they want to be able to describe the application as code and then drive, automatically or programmatically, the instantiation of the infrastructure needed for that application. And so what you see us doing is providing all these capabilities as an interface for all our network tools, whether it's the ISE that I just mentioned, whether it's our network controllers in the data center, whether these are the controllers in the campus. For all of those, we have cloud-native interfaces.
So an operator or a DevOps team can actually interact directly with that infrastructure the way they do today with everything that lives in the cloud, with the way they build the application. >>You can't even have the conversation about a cloud operating model that includes and comprises on-prem without programmable infrastructure. So that's very important. Last question, Thomas: are customers actually using this? You've made the announcements today. Are there any examples of customers out there doing this? >>We do have a lot of customers out there that are moving down this path and using the Cisco high-performance infrastructure, both on the compute side as well as on the Nexus side. One of the customers, and this is an interesting case, is Rakuten. Rakuten is a large telco provider, a mobile 5G operator, in Japan and expanding into different countries. And so people sometimes think, oh, cloud, you must be talking about the public cloud providers, the big three or four. But if you look at it, a lot of the telco service providers are actually cloud providers as well, and expanding very rapidly. And so we're actually very proud to work together with Rakuten and help them build a high-performance data center infrastructure, based on 100 gig and actually 400 gig, to drive their deployment of a 5G mobile cloud infrastructure, which is where the whole world of traffic is going. And so it's really exciting to see this development and see the power of automation and visibility, together with the high-performance infrastructure, becoming reality and actually delivering services. >>Some great points you're making there. Yes, you have the big four clouds, they're enormous, but then you have a lot of actually quite large clouds: telcos that are either proximate to those clouds or in places where those hyperscalers may not have a presence and are building out their own infrastructure. So that's a great case study. Thomas, hey, great having you on. Thanks so much for spending some time with us. >>Yeah, the same here. I appreciate it. Thanks a lot. >>Thank you for watching, everybody. This is Dave Vellante for theCUBE, the leader in tech event coverage.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Thomas | PERSON | 0.99+ |
Dave Volante | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Sam | PERSON | 0.99+ |
Thomas Scheibe | PERSON | 0.99+ |
Thomas Shabbat | PERSON | 0.99+ |
Rakuten | ORGANIZATION | 0.99+ |
Japan | LOCATION | 0.99+ |
two examples | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
mid this year | DATE | 0.99+ |
one example | QUANTITY | 0.99+ |
two | QUANTITY | 0.98+ |
ACI | ORGANIZATION | 0.98+ |
one place | QUANTITY | 0.98+ |
two angles | QUANTITY | 0.98+ |
8,000 | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
one | QUANTITY | 0.96+ |
both | QUANTITY | 0.96+ |
9,000 | QUANTITY | 0.96+ |
four | QUANTITY | 0.96+ |
pandemic | EVENT | 0.94+ |
three | QUANTITY | 0.87+ |
thousand eyes | QUANTITY | 0.86+ |
Virgin | ORGANIZATION | 0.85+ |
zero trust | QUANTITY | 0.85+ |
Zero trust | QUANTITY | 0.84+ |
billion dollar | QUANTITY | 0.84+ |
couple of years back | DATE | 0.82+ |
Cisco Future Cloud | ORGANIZATION | 0.81+ |
10 | QUANTITY | 0.75+ |
one domain | QUANTITY | 0.74+ |
a million insights | QUANTITY | 0.73+ |
double | QUANTITY | 0.73+ |
one event | QUANTITY | 0.7+ |
single data center | QUANTITY | 0.7+ |
half | QUANTITY | 0.65+ |
Hashi | TITLE | 0.61+ |
couple | QUANTITY | 0.53+ |
years | QUANTITY | 0.48+ |
Apres | ORGANIZATION | 0.44+ |
thousand | QUANTITY | 0.41+ |
Breaking Analysis: Why Apple Could be the Key to Intel's Future
>> From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> The latest Arm Neoverse announcement further cements our opinion that its architecture, business model and ecosystem execution are defining a new era of computing and leaving Intel in its dust. We believe the company and its partners have at least a two-year lead on Intel and are currently in a far better position to capitalize on the major waves that are driving the technology industry and its innovation. To compete, our view is that Intel needs a new strategy. Now, Pat Gelsinger is bringing that, but Intel also needs financial support from the US and EU governments. Pat Gelsinger was just noted as asking the EU government for $9 billion, sorry, 8 billion euros, in financial support. And very importantly, Intel needs volume for its new Foundry business. And that is where Apple could be key. Hello, everyone, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis we'll explain why Apple could be the key to saving Intel and America's semiconductor industry leadership. We'll also further explore our scenario of the evolution of computing and what will happen to Intel if it can't catch up. Here's a hint: it's not pretty. Let's start by looking at some of the key assumptions we've made that are informing our scenarios. We've pointed out many times that we believe Arm wafer volumes are approaching 10 times those of x86 wafers. This means that manufacturers of Arm chips have a significant cost advantage over Intel. We've covered that extensively, but we repeat it because when we see news reports and analysis in print, it's not a factor that anybody's highlighting. And this is probably the most important issue that Intel faces, and it's why we feel that Apple could be Intel's savior. We'll come back to that. We've projected that the chip shortage will last no less than three years, perhaps even longer. As we reported in a recent Breaking Analysis, while Moore's Law is waning, the result of Moore's Law, i.e. the doubling of processor performance every 18 to 24 months, is actually accelerating. We've observed and continue to project a quadrupling of performance every two years, breaking historical norms. Arm is attacking the enterprise and the data center. We see hyperscalers as the tip of their entry spear; AWS's Graviton chip is the best example. Amazon and other cloud vendors that have engineering and software capabilities are making Arm-based chips capable of running general purpose applications. This is a huge threat to x86, and if Intel doesn't respond quickly, we believe Arm will gain a 50% share of enterprise semiconductor spend by 2030. We see the definition of cloud expanding. Cloud is no longer a remote set of services in the cloud; rather, it's expanding to the edge, where the edge could be a data center, a data closet, or a true edge device or system. And Arm is, by far in our view, in the best position to support the new workloads and computing models that are emerging as a result. Finally, geopolitical forces are at play here. We believe the US government will do, or at least should do, everything possible to ensure that Intel and the US chip industry regain their leadership position in the semiconductor business. If they don't, the US and Intel could fade into irrelevance. Let's look at this last point and make some comments on that.
Here's a map of the South China Sea, and way off in the Pacific we've superimposed a little pie chart. We asked ourselves: if you had a hundred points of strategic value to allocate, how much would you put in the semiconductor manufacturing bucket and how much would go to design? And our conclusion was 50/50. Now, it used to be, because of Intel's dominance with x86 and its volume, that the United States was number one in both strategic areas. But today that orange slice of the pie is dominated by TSMC, thanks to Arm volumes. Now, we've reported extensively on this and we don't want to dwell on it for too long, but on all accounts, cost, technology and volume, TSMC is the clear leader here. China's president Xi has a stated goal of unifying Taiwan by China's centennial in 2049. Will this tiny island nation, which dominates a critical part of the strategic semiconductor pie, go the way of Hong Kong and be subsumed into China? Well, military experts say it would be very hard for China to take Taiwan by force without heavy losses and some serious international repercussions. The US's military presence in the Philippines, Okinawa and Guam, combined with support from Japan and South Korea, would make it even more difficult. And certainly the Taiwanese people, you would think, would prefer their independence. But Taiwanese leadership ebbs and flows between those hardliners who really want to separate and want independence and those that are more sympathetic to China. Could China, for example, use cyber warfare to control the narrative in Taiwan over time? Remember, if you control the narrative you can control the meme. If you control the meme you control the idea. If you control the idea, you control the belief system. And if you control the belief system, you control the population, without firing a shot. So is it possible that over the next 25 years China could weaponize propaganda and social media to reach its objectives with Taiwan? Maybe it's a long shot, but if you're a senior strategist in the US government, would you want to leave that to chance? We don't think so. Let's park that for now and double-click on one of our key findings, and that is the pace of semiconductor performance gains. As we first reported a few weeks ago, while Moore's Law is moderating, the outlook for cheap, dense and efficient processing power has never been better. This slide shows two simple log lines. One is the traditional Moore's Law curve; that's the one at the bottom. And the other is the current pace of system performance improvement that we're seeing, measured in trillions of operations per second. Now, if you calculate the historical annual rate of processor performance improvement that we saw with x86, the math comes out to around 40% improvement per year. Now that rate is slowing. It's now down to around 30% annually. So we're not quite doubling every 24 months anymore with x86, and that's why people say Moore's Law is dead. But if you look at the combined effects of packaging CPUs, GPUs, NPUs, accelerators, DSPs and all the alternative processing power you can find in SoC (system on chip) and eventually system on package, it's growing at more than a hundred percent per annum. And this means that processing power is now quadrupling every 24 months. That's impressive. And the reason we're here is Arm. Arm has redefined the processor model for a new era of computing.
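The rates quoted above translate directly into doubling times: doubling time = log(2) / log(1 + annual rate). A quick sketch using the figures in this analysis (roughly 40% per year historically, about 30% now for x86 alone, and over 100% per year for aggregate system performance):

```python
import math


def doubling_years(annual_rate: float) -> float:
    """Years to double at a compound annual improvement rate (0.40 = 40% per year)."""
    return math.log(2) / math.log(1 + annual_rate)


# Figures cited in the analysis; treat them as rough, order-of-magnitude inputs.
for label, rate in [("historical x86 ~40%/yr", 0.40),
                    ("current x86 ~30%/yr", 0.30),
                    ("aggregate system >100%/yr", 1.00)]:
    print(f"{label}: doubles every {doubling_years(rate):.1f} years")

# At 100%+ per year, performance doubles annually, i.e. quadruples every two years.
```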
Arm made an announcement last week which really recycled some old content from last September, but it also put forth new proof points on adoption and performance. Arm laid out three components in its announcement. The first was the Neoverse V1, which is all about extending vector performance. This is critical for high performance computing (HPC), which at one point you might have thought was a niche, but it is the AI platform, and AI workloads are not a niche. Second, Arm announced the Neoverse N2 platform, based on the recently introduced Armv9; we talked about that a lot in one of our earlier Breaking Analysis episodes. This is going to deliver a performance boost of around 40%. The third was called CMN-700 (Arm maybe needs to work on some of its names), but Arm said this is the industry's most advanced mesh interconnect. This is the glue for the V1 and the N2 platforms. The importance is that it allows for more efficient use and sharing of memory resources across components of the system package. We talked extensively in previous episodes about the importance of that capability. Now let's share this wheel diagram with you; it underscores the completeness of the Arm platform. Arm's approach is to enable flexibility across an open ecosystem, allowing for value add at many levels. Arm builds the architecture and design and allows an open ecosystem to provide the value-added software. Now, very importantly, Arm has created the standards and specifications by which it can, with certainty, certify that the foundry can make the chips to a high quality standard, and importantly that all the applications are going to run properly. In other words, if you design an application, it will work across the ecosystem and maintain backwards compatibility with previous generations, like Intel has done for years. But Arm, as we'll see next, is positioning not only for existing workloads but also for the emerging high-growth applications. To that point, here's the Arm total available market as we see it. We think the end-market spending value of just the chips going into these areas is $600 billion today, and it's going to grow to $1 trillion by 2030. In other words, we're allocating the value of the end-market spend in these sectors to the marked-up value of the silicon as a percentage of the total spend. It's enormous. The big areas are hyperscale clouds, which we think are around 20% of this TAM, the HPC and AI workloads, which account for about 35%, and the Edge, which will ultimately be the largest of all, probably capturing 45%. These are rough estimates and they'll ebb and flow, and there's obviously some overlap, but the bottom line is the market is huge and growing very rapidly. And you see that little red highlighted area: that's enterprise IT, traditional IT, and that's the x86 market in context. So it's relatively small. What's happening is we're seeing a number of traditional IT vendors packaging x86 boxes, throwing them over the fence and saying, we're going after the Edge. And what they're doing is saying, okay, the edge is this aggregation point for all these endpoint devices. We think the real opportunity at the Edge is for AI inferencing. That is where most of the activity and most of the spending is going to be, and we think Arm is going to dominate that market. And this brings up another challenge for Intel. So we've made the point a zillion times that PC volumes peaked in 2011, and we saw that as problematic for Intel for the cost reasons that we've beat into your head.
And lo and behold, PC volumes actually grew last year thanks to COVID, and will continue to grow, it seems, for a year or so. Here's some ETR data that underscores that fact. This chart shows the Net Score (remember, that's spending momentum), broken down for Dell's laptop business. The green means spending is accelerating, the red is decelerating, and the blue line is Net Score, that spending momentum. And the trend is up and to the right. Now, as we've said, this is great news for Dell and HP and Lenovo and Apple for its laptops, all the laptop sellers, but it's not necessarily great news for Intel. Why? I mean, it's okay, but what it does is shift Intel's product mix toward lower-margin PC chips, and it squeezes Intel's gross margins. So the CFO has to explain that margin contraction to Wall Street. Imagine that: the business that got Intel to its monopoly status is growing faster than the high-margin server business, and that's pulling margins down. So as we said, Intel is fighting a war on multiple fronts. It's battling AMD in the core x86 business, both PCs and servers. It's watching Arm mop up in mobile. It's trying to figure out how to reinvent itself and change its culture to allow more flexibility into its designs. And it's spinning up a Foundry business to compete with TSMC. So it's got to fund all this while at the same time propping up its stock with buybacks. Intel last summer announced that it was accelerating its $10 billion stock buyback program. $10 billion. Buy stock back or build a foundry: which do you think is more important for the future of Intel and the US semiconductor industry? So Intel has got to protect its past while building its future and placating Wall Street, all at the same time. And here's where it gets even more dicey. Intel's got to protect its high-end x86 business. It is the cash cow that funds the operation. Who's Intel's biggest customer: Dell, HP, Facebook, Google, Amazon? Well, let's just say Amazon is a big customer. Can we agree on that? And we know AWS's biggest revenue generator is EC2, and EC2 is powered by microprocessors made by Intel and others. We found this slide in the Arm Neoverse deck and it caught our attention. The data comes from a data platform called Liftr Insights. The charts show the rapid growth of AWS's Graviton chips, which are its custom-designed chips based on Arm, of course. The blue is Graviton; the black, vendor A, presumably is Intel; and the gray is assumed to be AMD. The eye-popper is the 2020 pie chart: of the instance deployments, nearly 50% are Graviton. So if you're Pat Gelsinger, you'd better be all over AWS. You don't want to lose this customer and you're going to do everything in your power to keep them. But the trend is not your friend in this account. Now the story gets even gnarlier, and here's the killer chart. It shows the ISV ecosystem platforms that run on Graviton2. Because AWS has such good engineering and controls its own stack, it can build Arm-based chips that run software designed to run on general-purpose x86 systems. Yes, it's true, the ISVs have got to do some work, but large ISVs have huge incentives because they want to ride the AWS wave. Certainly the user doesn't know or care, but AWS cares, because it's driving costs and energy consumption down and performance up. Lower cost, higher performance: sounds like something Amazon wants to consistently deliver, right? And the ISV portfolio that runs on Arm-based Graviton is just going to continue to grow.
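For readers who want to see what an Arm-based instance deployment looks like in practice, here is a minimal boto3 sketch that launches a Graviton-powered EC2 instance. The AMI ID is a placeholder to be replaced with a current arm64 image for your region; the instance type is one of the Graviton2 families.

```python
import boto3

# Minimal sketch: launch one Arm-based (Graviton) EC2 instance.
# The AMI ID below is a placeholder; substitute a current arm64 AMI for your region.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder arm64 AMI
    InstanceType="m6g.large",          # Graviton2-based general-purpose instance
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "arch", "Value": "arm64"}],
    }],
)

instance_id = response["Instances"][0]["InstanceId"]
print("launched", instance_id)
```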
And by the way, it's not just Amazon. It's Alibaba, it's Oracle, it's Marvell, it's Tencent. The list keeps growing; Arm trotted out a number of names, and I would expect over time it's going to be Facebook and Google and Microsoft, if they're not already there. Now, the last piece of the Arm architecture story that we want to share is the progress that they're making, compared to x86. This chart shows how Arm is innovating. Let's start with the first line under platform capabilities: the number of cores supported per die, or system. Now, a die is what ends up as a chip on a small piece of silicon. Think of the die as the circuit diagram of the chip, if you will, and these circuits are fabricated on wafers using photolithography. The wafers are then cut up into many pieces, each one having a chip. Each of these pieces is the chip, and two chips make up a system. The key here is that Arm is quadrupling the number of cores instead of increasing thread counts. It's giving you cores. Cores are better than threads because threads are shared and cores are independent and much easier to virtualize. This is particularly important in situations where you want to be as efficient as possible sharing massive resources, like the cloud. Now, as you can see on the right-hand side of the chart, under the orange, Arm is dramatically increasing its capabilities compared to previous generations. And one of the other highlights to us is that last line, CCIX and CXL support (again, Arm maybe needs to name these better). These refer to Arm's memory-sharing capabilities within and between processors. This allows CPUs, GPUs, NPUs, et cetera to share resources very efficiently, especially compared to the way x86 works, where everything is currently controlled by the x86 processor. CCIX and CXL support, on the other hand, will allow designers to program the system and share memory wherever they want within the system directly, without having to go through the overhead of a central processor which owns the memory. So, for example, if there's a CPU, a GPU and an NPU, the CPU can say to the GPU, give me your results at a specified location and signal me when you're done. So when the GPU is finished calculating and sending the results, the GPU just signals that the operation is complete, versus having to ping the CPU constantly, which is overhead-intensive. Now, composability in that chart means the system isn't fixed; rather, you can programmatically change the characteristics of the system on the fly. For example, if the NPU is idle you can allocate more resources to other parts of the system. Now, Intel is doing this too in the future, but we think Arm is way ahead, at least by two years. This is also huge for Nvidia, which today relies on x86. A major problem for Nvidia has been coherent memory management, because the utilization of its GPUs is appallingly low and can't be easily optimized. Last week, Nvidia announced its intent to provide an AI capability for the data center without x86, i.e. using Arm-based processors. So Nvidia, another big Intel customer, is also moving to Arm. And if it's successful in acquiring Arm, which is still a long shot, this trend is only going to accelerate. But the bottom line is, if Intel can't move fast enough to stem the momentum of Arm, we believe Arm will capture 50% of the enterprise semiconductor spending by 2030.
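The hand-off described above (write results to an agreed location, then signal completion instead of being polled) can be illustrated at the software level with a short sketch. This is only an analogy for the coordination pattern; real CCIX/CXL coherency is a hardware protocol, not Python processes.

```python
from multiprocessing import Array, Event, Process

# Software-level analogy of the hand-off described above: the producer writes
# results to an agreed location and signals completion, instead of the consumer
# polling for status.


def producer(results, done):
    for i in range(len(results)):
        results[i] = i * i  # stand-in for the accelerator's computation
    done.set()              # results are at the agreed location; signal completion


if __name__ == "__main__":
    results = Array("d", 8)  # shared buffer both sides can address
    done = Event()
    Process(target=producer, args=(results, done)).start()
    done.wait()              # block until signaled, no busy polling
    print("consumer read:", [results[i] for i in range(len(results))])
```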
So how does Intel continue to lead? Well, it's not going to be easy. Remember, we said Intel can't go it alone, and we posited that the company would have to initiate a joint venture structure. We proposed a triumvirate of Intel; IBM, with its Power10 and memory aggregation and memory architecture; and Samsung, with its volume manufacturing expertise, on the premise that it coveted an on-US-soil presence. Now, upon further review, we're not sure Samsung is willing to give up and contribute its IP to this venture. It's put a lot of money and a lot of emphasis on infrastructure in South Korea. And furthermore, we're not convinced that Arvind Krishna, who we believe ultimately made the call to jettison IBM's microelectronics business, wants to put his efforts back into manufacturing semiconductors. So we have this conundrum. Intel is fighting AMD, which is already at seven nanometers. Intel has fallen behind in process manufacturing, which is strategically important to the United States, its military and the nation's competitiveness. Intel's behind the curve on cost and architecture and is losing key customers in the most important market segments. And it's way behind on volume, the critical piece of the pie that nobody ever talks about. Intel must become more price- and performance-competitive with its x86 line and bring in new composable designs that keep x86 competitive, giving customers and designers the ability to add and customize GPUs, NPUs, accelerators, et cetera, all while launching a successful Foundry business. So we think there's another possibility in this thought exercise. Apple is currently reliant on TSMC and is pushing them hard toward five nanometers, in fact sucking up a lot of that volume, and TSMC is maybe not servicing some other customers as well as it's servicing Apple, because it's a bit distracted and you have this chip shortage. So Apple, because of its size, gets the lion's share of the attention, but Apple needs a trusted onshore supplier. Sure, TSMC is adding manufacturing capacity in the US, in Arizona. But back to our precarious scenario in the South China Sea: will the US government and Apple sit back and hope for the best, or will they hope for the best and plan for the worst? Let's face it, if China gains control of TSMC, it could block access to the latest and greatest process technology. Apple just announced that it's investing billions of dollars in semiconductor technology across the US. The US government is pressuring big tech. What about an Apple and Intel joint venture? Apple brings the volume, its cloud, its money, its design leadership, all that to the table, and it could partner with Intel. It gives Intel the Foundry business and a guaranteed volume stream. And maybe the US government gives Apple a little bit of breathing room on the whole breakup-big-tech narrative. And even though that narrative isn't necessarily specifically targeting Apple, maybe the US government needs to think twice before it attacks big tech, and think about the long-term strategic ramifications. Wouldn't that be ironic? Apple dumps Intel in favor of Arm for the M1, and then incubates and essentially saves Intel with a pipeline of Foundry business. Now, back to IBM: in this scenario we've put a question mark on the slide, because maybe IBM just gets in the way, and why not a nice, clean partnership between Intel and Apple? Who knows? Maybe Gelsinger can even negotiate this without giving up any equity to Apple. But Apple could be a key ingredient in a cocktail of a new strategy under Pat Gelsinger's leadership.
Gobs of cash from the US and EU governments and volume from Apple. Wow, still a long shot, but one worth pursuing because as we've written, Intel is too strategic to fail. Okay, well, what do you think? You can DM me @dvellante or email me at david.vellante@siliconangle.com or comment on my LinkedIn post. Remember, these episodes are all available as podcasts so please subscribe wherever you listen. I publish weekly on wikibon.com and siliconangle.com. And don't forget to check out etr.plus for all the survey analysis. And I want to thank my colleague, David Floyer for his collaboration on this and other related episodes. This is Dave Vellante for theCUBE insights powered by ETR. Thanks for watching, be well, and we'll see you next time. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David Floyer | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Samsung | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
TSMC | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
2011 | DATE | 0.99+ |
Lenovo | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Pat Gelsinger | PERSON | 0.99+ |
$10 billion | QUANTITY | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
ORGANIZATION | 0.99+ | |
50% | QUANTITY | 0.99+ |
Alibaba | ORGANIZATION | 0.99+ |
$600 | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
45% | QUANTITY | 0.99+ |
two chips | QUANTITY | 0.99+ |
10 times | QUANTITY | 0.99+ |
10 cents | QUANTITY | 0.99+ |
South Korea | LOCATION | 0.99+ |
US | LOCATION | 0.99+ |
Last week | DATE | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Arizona | LOCATION | 0.99+ |
U S | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
david.vellante@siliconangle.com | OTHER | 0.99+ |
1 trillion | QUANTITY | 0.99+ |
2030 | DATE | 0.99+ |
Marvell | ORGANIZATION | 0.99+ |
China | ORGANIZATION | 0.99+ |
Arvind Krishna | PERSON | 0.99+ |
two years | QUANTITY | 0.99+ |
Moore | PERSON | 0.99+ |
$9 billion | QUANTITY | 0.99+ |
10 | QUANTITY | 0.99+ |
EU | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
last week | DATE | 0.99+ |
twice | QUANTITY | 0.99+ |
first line | QUANTITY | 0.99+ |
Okinawa | LOCATION | 0.99+ |
last September | DATE | 0.99+ |
Hong Kong | LOCATION | 0.99+ |