Thomas Cornely Indu Keri Eric Lockard Accelerate Hybrid Cloud with Nutanix & Microsoft


 

>> Okay, we're back with the hybrid cloud power panel. I'm Dave Vellante, and with me are Eric Lockard, who's the corporate vice president of Microsoft Azure Specialized; Thomas Cornely, the senior vice president of products at Nutanix; and Indu Keri, who's the senior vice president of engineering, NCI and NC2, at Nutanix. Gentlemen, welcome to theCUBE. Thanks for coming on. >> It's great to be here. >> Thanks for having us. >> Eric, let's start with you. We hear so much about cloud first. What's driving the need for hybrid cloud for organizations today? I mean, why not just put everything in the public cloud? >> Yeah, well, the public cloud has a bunch of inherent advantages, right? It has effectively infinite capacity, the ability to innovate without a lot of upfront cost, regions all over the world. So there is a trend towards public cloud, but not everything can go to the cloud, especially right away. There are lots of reasons customers want to have assets on premises: data gravity, sovereignty and so on. And so really hybrid is the way to achieve the best of both worlds, to leverage the assets and investments that customers have on premises, but also take advantage of the cloud for bursting or regionality or expansion, especially coming out of the pandemic. We saw a lot of this from work-from-home and video conferencing and so on driving a lot of cloud adoption. So hybrid is really the way that we see customers achieving the best of both worlds. >> Yeah, it makes sense. Thomas, if you could talk a little bit, I don't want to inundate people with the acronyms, but the Nutanix Cloud Clusters on Azure, what is that? What problems does it solve? Give us some color there, please. >> Yeah, so Cloud Clusters on Azure, which we actually call NC2 to make it simple, so NC2 on Azure is really our solution for hybrid cloud, right? And you think about hybrid cloud: it's highly desirable, customers want it. They know this is the right way to do it for them, given that they want to have workloads on premises, at the edge, and in public clouds. But it's complicated, it's hard to do, right? And the first thing that you deal with is just silos, right? You have different infrastructure that you have to go and deal with. You have different teams, different technologies, different areas of expertise, you're dealing with different portals, networking gets complicated, security gets complicated. And so, you heard me say this already, hybrid can be complex. And so what we've done with NC2 on Azure is make that simple, right? We allow teams to have a solution that lets you take any application running on premises and move it as is to any Azure region where NC2 is available. Once it's running there, you keep the same operating model, right? And that's actually super valuable: to go and do this in a simple fashion, do it faster, and basically do hybrid in a more cost-effective fashion for all your applications. And that's what's really special about NC2 on Azure today. >> So Thomas, just a quick follow-up on that. If I understand you correctly, it's an identical experience. Did I get that right? >> This is the key for us, right? When you think about what you're running on premises, you are used to a way of doing things: how you run your applications, how you operate them, how you protect them.
And what we do here is extend the Nutanix operating model to workloads running in Azure, using the same core stack that you're running on premises, right? So once you have a cluster deployed with NC2 in Azure, it's going to look like the same cluster that you might be running at the edge or in your own data center, using the same tools, using the same admin constructs to go protect the workloads, make them highly available, do disaster recovery or secure them. All of that stays the same. But now you are in Azure, and this is what we've spent a lot of time working on with the Microsoft teams: you actually now have access to the whole suite of Azure services from those workloads. So now you get the best of both worlds; we bridge them together, and you get seamless access to those services between what you get from Nutanix and what you get from Azure. >> Yeah. And as you alluded to, this has traditionally been non-trivial, and people have been looking forward to it for quite some time. So Indu, I want to understand, from an engineering perspective, your team had to work with the Microsoft team, and I'm sure this is not just a press release or a PowerPoint; you had to do some engineering work. So what specific engineering work did you guys do, and what's unique about this relative to other solutions in the marketplace? >> So let me start with what's unique about this, and I think Thomas and Eric both did a really good job of describing that. The best way to think about what we are delivering jointly with Microsoft is that it speeds up the journey to the public cloud. You know, one way to think about this is that moving to the public cloud is sort of like remodeling your house. When you start remodeling your house, you find that you start with something, and before you know it, you're trying to remodel the entire house. And that's a little bit like what the journey to the public cloud starts to look like when you start to refactor applications, because most of the applications out there today weren't designed for the public cloud to begin with. NC2 allows you to flip that on its head and say: take your application as is and lift and shift it to the public cloud, at which point you start the refactor journey. And one of the things that we have done really well with NC2 on Azure is that NC2 is not something that sits off to the side of Azure. It's fully integrated into the Azure fabric, especially the software-defined network, the SDN piece. What that means is that you don't have to worry about connecting your NC2 cluster to Azure through some sort of network pipe. You have direct access to the Azure services from the same application that's now running on an NC2 cluster. And that makes your refactoring journey so much easier. Your management plane looks the same, your high-performance nodes, like the NVMe nodes, look the same. And really, other than the fact that you're doing something in the public cloud, all the Nutanix goodness that you're used to, you continue to receive. There is a lot of secret sauce that we have had to develop as part of this journey.
>> But if we had to pick one that really stands out, it is how we take the complexity, the network complexity of a public cloud, in this case Azure, and make it as familiar to Nutanix customers as the VPC construct, the virtual private cloud construct, so that they can really think of their on-prem networking and the public cloud networking in very similar terms. There's a lot more that's gone on behind the scenes. And by the way, I'll tell you a funny anecdote. My dad used to say when I grew up that if you really want to grow up, you have to do two things: you have to build a house, and you have to marry your kid off to someone. And I would add a third: do co-development with a public cloud provider as a partner. This has been just an absolutely amazing journey with Eric and the Microsoft team, and we're very grateful for their support. >> I need NC2 for my house. I live in a house that was built in 1687, and we connect old to new, and it is a bolt-on. But the secret sauce, I mean, there's a lot there, but is it a PaaS layer? You didn't just wrap it in a container and shove it into the public cloud; you've done more than that, I'm inferring. >> You know, it's actually an infrastructure-layer offering, on top of which you can obviously run various types of platform services. So, for example, down the road, if you have a containerized application, you'll actually be able to take it from on-prem and run it on NC2. But the NC2 offer itself is an infrastructure-level offering. And the trick is that the storage that you're used to, the high-performance storage that defines Nutanix to begin with, the hypervisor that you're used to, the network constructs that you're used to, like microsegmentation for security purposes, all of them are available to you on NC2 in Azure the same way that you're used to on-prem. And furthermore, managing all of that through Prism, which is our management interface and management console, also remains the same. That makes your security model easier, that makes your management challenge easier, and that makes it much easier for an application person or the IT office to report back to the board that they have started to execute on the cloud mandate, and that they've done it much faster than they would have been able to otherwise. >> Great, thank you for helping us understand the plumbing. So now, Thomas, maybe we can get to the customers. What are you seeing? What are the use cases that are going to emerge for this solution? >> Yeah, you know, we've had the solution for a while, and this is now new on Azure, which is going to extend the reach of the solution and get us closer to the type of use cases that are unique to Azure in terms of those solutions for analytics and so forth. But the key use cases for us, the first one, you know, we talked about, is migration. We see customers on that cloud journey; they're looking to go and move applications wholesale from on premises to public cloud. We make this very easy because, in the end, they take the same constructs that are around the application, and we make them available now in the Azure region. You can do this for any application. There's no change to the application, no networking change. The same IP will work the same whether you're running on premises or in Azure.
So that's a big one. And you know, the type of drivers point politically or maybe I wanna go do something different or I wanna go and shut down location on premises, I need to do that with a given timeline. I can now move first and then take care of optimizing the application to take advantage of all that Azure has to offer. So migration and doing that in a simple fashion, in a very fast manner is, is a key use case. Another one, and this is classic for leveraging public cloud force, which are doing on premises, is disaster recovery. And something that we refer to as elastic disaster recovery, being able to go and actually configure a secondary site to protect your on premises workloads. But I think that site sitting in Azure as a small site, just enough to hold the data that you're replicating and then use the fact that you cannot get access to resources on demand in Azure to scale out the environment, feed over workloads, run them with performance, potentially fill them back to on premises and then shrink back the environment in Azure to again, optimize cost and take advantage of elasticity that you get from public cloud models. >>And then the last one, building on top of that is just the fact that you cannot get bursting use cases and maybe running a large environment, typically desktop, you know, VDI environments that we see running on premises and I have, you know, a seasonal requirement to go and actually enable more workers to go get access the same solution. You could do this by sizing for the large burst capacity on premises wasting resources during the rest of the year. What we see customers do is optimize what they're running on premises and get access to resources on demand in Azure and basically move the workload and now basically get combined desktop running on premises desktops running on NC two on Azure, same desktop images, same management, same services, and do that as a burst use case during, say you're a retailer that has to go and take care of your holiday season. You know, great use case that we see over and over again for our customers, right? And pretty much complimenting the notion of, look, I wanna go to desktop as a service, but right now, now I don't want to refactor the entire application stack. I just won't be able to get access to resources on demand in the right place at the right time. >>Makes sense. I mean this is really all about supporting customers', digital transformations. We all talk about how that was accelerated during the pandemic and, but the cloud is a fundamental component of the digital transformations. And Eric, you, you guys have obviously made a commitment between Microsoft and and Nutanix to simplify hybrid cloud and that journey to the cloud. How should customers, you know, measure that? What does success look like? What's the ultimate vision here? >>Well, the ultimate vision is really twofold. I think the one is to, you know, first is really to ease a customer's journey to the cloud to allow them to take advantage of all the benefits to the cloud, but to do so without having to rewrite their applications or retrain their, their administrators and or, or to obviate their investment that they already have in platforms like, like Nutanix. And so the, the work that companies have done together here, you know, first and foremost is really to allow folks to come to the cloud in the way that they want to come to the cloud and take really the best of both worlds, right? 
Leverage their investment in the capabilities of the Nutanix platform, but do so in conjunction with the advantages and capabilities of Azure. Second, it is really to extend some of the cloud capabilities down onto the on-premises infrastructure. And so with investments that we've made together, with Azure Arc for example, we're really extending the Azure control plane down onto on-premises Nutanix clusters and bringing the capabilities that that provides to the Nutanix customer, as well as various Azure services like our data services and Azure SQL Server. So it's really coming at the problem from two directions. One is from kind of traditional on-prem up into the cloud, and the second is from the cloud, leveraging the investment customers have in on-premises HCI. >> Got it, thank you. Okay, last question. Maybe each of you could just give us one key takeaway for our audience today. Maybe we start with Thomas, then Indu, and then Eric, you can bring us home. >> Sure. So the key takeaway is that Nutanix Cloud Clusters on Azure is now GA. This is something that we've had tremendous demand for from our customers, both from the Microsoft side and the Nutanix side, going back years literally, right? People have been wanting to see this. This is now live, GA, open for business, and we're ready to go and engage and ready to scale, right? This is our first step in a long journey in a very key partnership for us at Nutanix. >> Great. Indu? >> You know, Dave, in a prior life, about seven or eight years ago, I was part of a team that took a popular tax preparation software and moved it to the public cloud. And that was a journey that took us four years and probably several hundred million dollars. If we had had NC2 then, it would've saved us half the money, but more importantly, we would've gotten there in one third the time. And that's really the value of this. >> Okay. Eric, bring us home, please. >> Yeah, I'll just point out this is not something that's just bolted on, or something we started yesterday. This is something the teams at both companies have been working on together for years, really. And it's a way of deeply integrating Nutanix into the Azure cloud, with the ultimate goal of, again, providing cloud capabilities to the Nutanix customer in a way that they can take advantage of the cloud and then complement those applications over time with additional Azure services, like storage, for example. So it really is a great on-ramp to the cloud for customers who have significant investments in Nutanix clusters on premises. >> Love the co-engineering and the ability to take advantage of those cloud-native tools and capabilities, real customer value. Thanks, gentlemen, really appreciate your time. >> Thank you. >> Thank you. >> Okay, keep it right there. You're watching Accelerate Hybrid Cloud, that journey with Nutanix and Microsoft technology, on theCUBE, your leader in enterprise and emerging tech coverage. >> Organizations are increasingly moving towards a hybrid cloud model that contains a mix of on-premises, public and private clouds. A recent study confirms 83% of businesses agree that hybrid multi-cloud is the ideal operating model. Despite its many benefits, deploying a hybrid cloud can be challenging: it's complex, slow and expensive, requires different skills and toolsets, and involves separate, siloed management interfaces.
In fact, 87% of surveyed enterprises believe that multi-cloud success will require simplified management of mixed infrastructures. >> With Nutanix and Microsoft, your hybrid cloud gets the best of both worlds: the predictable costs, performance control and data sovereignty of a private cloud, and the scalability, cloud services, ease of use and fractional economics of the public cloud. Whatever your use case, Nutanix Cloud Clusters simplifies IT operations, is faster and lowers risk for migration projects, lowers cloud TCO, provides investment optimization, and offers effortless, limitless scale and flexibility. Choose NC2 to accelerate your business in the cloud and achieve true hybrid cloud success. Take a free self-guided 30-minute test drive of the solution's provisioning steps and use cases at nutanix.com/azure. >> Okay, so we're just wrapping up Accelerate Hybrid Cloud with Nutanix and Microsoft, made possible by Nutanix, where we just heard how Nutanix is partnering with cloud and software leader Microsoft to enable customers to execute on a true hybrid cloud vision with actionable solutions. We pushed and got the answer that with NC2 on Azure, you get the same stack, the same performance, the same networking, the same automation, the same workflows across on-prem and Azure estates, realizing the goal of simplifying and extending on-prem workloads to any Azure region, moving apps without complicated refactoring, and being able to tap the full complement of native services that are available on Azure. Remember, all these videos are available on demand at thecube.net, and you can check out siliconangle.com for all the news related to this announcement and all things enterprise tech. Please go to nutanix.com for, of course, information about this announcement and the partnership, but there's also a ton of resources to better understand the Nutanix product portfolio. There are white papers, videos and other valuable content, so check that out. This is Dave Vellante for Lisa Martin with theCUBE, your leader in enterprise and emerging tech coverage. Thanks for watching the program, and we'll see you next time.
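To make the "same operating model" point from the panel concrete, here is a minimal sketch of what that looks like from an operator's seat: the same Prism Central REST call enumerates clusters whether they run in your own data center, at the edge, or as NC2 clusters in an Azure region. This is an illustrative example, not Nutanix's official sample; the endpoint shape follows the public Prism Central v3 API as I understand it, and the host, credentials and field handling are placeholder assumptions.

```python
# Illustrative sketch only: query Prism Central's v3 REST API to list clusters.
# Host, credentials, and field handling here are placeholder assumptions.
import requests

PRISM_CENTRAL = "https://prism-central.example.com:9440"  # placeholder address
AUTH = ("admin", "changeme")                              # placeholder credentials

def list_clusters():
    """Print every cluster Prism Central manages, on-prem or NC2 in Azure alike."""
    resp = requests.post(
        f"{PRISM_CENTRAL}/api/nutanix/v3/clusters/list",
        json={"kind": "cluster", "length": 100},
        auth=AUTH,
        verify=False,  # lab convenience only; use proper CA verification in practice
    )
    resp.raise_for_status()
    for entity in resp.json().get("entities", []):
        name = entity.get("spec", {}).get("name")
        uuid = entity.get("metadata", {}).get("uuid")
        print(name, "-", uuid)

if __name__ == "__main__":
    list_clusters()
```

The point of the sketch is the design claim made in the interview: the management plane does not change when a cluster moves to Azure, so existing tooling written against Prism keeps working.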

Published Date : Oct 12 2022



Video Exclusive: Oracle EVP Juan Loaiza Announces Lower Priced Entry Point for ADB


 

(upbeat music) >> Oracle is in the midst of an acceleration of its product cycles. It really has pushed new capabilities across its database, the database platforms, and of course the cloud, in an effort to maintain its position as the gold standard for cloud database. We've reported pretty extensively on Exadata, most recently the X9M that increased database IOPS and throughput. Organizations running mission-critical OLTP, analytics and mixed workloads tell us that they've seen meaningfully improved performance and lower costs, which you expect in a technology cycle. I often say if Oracle calls you out by name, it's a compliment, and it means you've succeeded. So just a couple of weeks ago, Oracle turned up the heat on MongoDB with a Mongo-compatible API, in an effort to persuade developers to run applications in Autonomous Database and on OCI, Oracle Cloud Infrastructure. There was a big emphasis by Oracle on ACID-compliant transactions and automatic scaling, as well as access to multiple data types. This caught my attention because in the early days of NoSQL there was a lot of chatter from folks about not needing ACID capability in the database anymore. Funny how that comes around. And anyway, you see Oracle investing; they spend money on R&D. We've always said that they're protecting their moat. Now, on social media I've seen some criticisms, like Oracle still is not adding enough new logos, and Oracle of course will dispute that and give you some examples. But to me what's most impressive is the big-name customers that Oracle gets to talk about in public: Deutsche Bank, Telefonica, Experian, FedEx, I mean dozens and dozens and dozens. I work with a lot of companies, and the quality of the customers Oracle puts in front of analysts like myself is very, very high, at the top of the list I would say, and they're big-spending customers. And as we've said many times, when it comes to mission-critical workloads, Oracle is the king. And one of the executives behind the success is a longtime CUBE alum, Juan Loaiza, who's executive vice president of mission critical database technologies at Oracle. And we've invited him back on today to talk about some news and Oracle's latest developments in database. Juan, welcome back to the show, and thanks for coming on today and talking about today's announcement. >> I'm very happy to be here today with you. >> Okay, so what are you announcing, and how does this help organizations, particularly those with existing Exadata Cloud@Customer installations? >> Yeah, the big thing we're announcing is that on our very successful Cloud@Customer platform, we're extending the capabilities of our Autonomous Database running on it. And specifically, we're allowing much smaller configurations, so customers can start small and grow with our Autonomous Database on our Cloud@Customer platform. >> So let's get into the granularity a little bit and double-click on this. Can you go over how customers carve up VM clusters for different workloads? What's the tangible benefit to them? >> Yeah, so it's pretty straightforward. We deploy our Cloud@Customer system anywhere the customer wants it, let's say in their data center. And then through our cloud APIs and GUIs they can carve it up into pieces, into basically VMs. They can say, hey, I want a VM with eight CPUs to do this, I want a VM with 20 CPUs to do that, I want a 500-CPU VM to do something else. And that's what we call a VM cluster, because in Cloud@Customer it is a highly available environment.
So you don't just get one VM, you get a cluster of highly available VMs. So you carve it up, you hand it out to different parts of a company. You might have development on one, testing on another one, some production sales on one VM, marketing on a different VM. And then you run your databases in there, and that's kind of how it works. And it's all done completely through our GUI, and it's very, very simple, because it's the same cloud APIs and GUIs that we use in the public cloud. >> Yeah, I was going to say, sounds like cloud. So what about prerequisites? What do customers have to do to take advantage of the new capabilities? Can they run it on an Exadata Cloud@Customer that they installed a couple of years ago? Do they have to upgrade the hardware? What migration pain is involved? >> Yeah, there's no pain, so it's just, (coughs) excuse me, we can take their existing system, they get our free software update, and they can just deploy Autonomous Database as a VM in their existing Exadata cloud system. >> Oh, nice. Okay, what's the bottom line on dollars? Our audience is always interested in cutting costs; it's one of the reasons they're moving to the cloud, for example. So how does Autonomous Database on VM clusters on Exadata Cloud@Customer help cut their cost? >> Well, it's pretty straightforward. So previous to this, a customer would have to dedicate a system to either autonomous database or to non-autonomous database; you had to choose one or the other. So on a system-by-system basis, you chose, I want this thing autonomous, or I don't want it autonomous. Now you carve up the VMs and say, for this VM I want autonomous, for that VM I want to run a regular managed database. So it lets customers now start small with any size they want. They could start with two CPUs and run an autonomous database, and all they pay for is the two CPUs that they use. >> Let's talk a little about traction. I mean, I remember we covered the original Exadata announcement quite a long time ago, and it's obviously evolved and taken many forms. Look, it's hard to argue that it hasn't been a big success; it has, for Oracle and your target customers. Does this announcement make Exadata Cloud@Customer more attractive for smaller companies? In other words, does it expand the TAM for ADB? And if so, how? >> Yeah, absolutely. I mean, our Exadata cloud platform is extremely successful. We have thousands of deployments. On our Exadata platform we have almost 90% of the global Fortune 100 and thousands of smaller customers. In the cloud we have now up to 40% of the global 100, the hundred biggest companies in the world, running on that. So it's been an extremely successful platform, and Cloud@Customer is super key. A lot of customers can't move their data to the public cloud, so we bring the public cloud to them with our Cloud@Customer offering. And so the big customers are the Fortune 100, but we have thousands of smaller customers also. And the nice thing about this offering is we can start with literally two CPUs. So you can be a very small customer and still run our Autonomous Database on our Cloud@Customer platform. >> Well, everybody cares about security and governance. I mean, especially the big guys, but the little guys in many ways as well: they want the capabilities of the large companies, but they can't necessarily afford them.
So I want to talk about security and in particular governance, and it's especially important for mission-critical apps. So how does this all change the security and governance paradigm? What do customers need to know there? >> Yeah, so the beauty of Autonomous Database, which is the thing that we're talking about today, is Oracle deals with all the security. So the OS, the hardware, firmware, VMs, the database itself, all the interfaces to the VM and to the database, all of that is done by Oracle, which is incredibly important, because there's a constant stream of security alerts coming out, and it's very difficult for customers to keep up with this stuff. I mean, it's hard for us, and we have thousands of engineers. And so we take that whole burden away from customers. You just don't have to think about it, we deal with it. So once you deploy an autonomous database, it is always secure, because anytime a security alert comes out, we apply it, and we do it in an online fashion also. Particularly for smaller customers it's even harder, because to keep up with all the security you need a giant team of security experts; even the biggest customers struggle with that, and a small customer is going to really struggle. You have to look at the entire stack, all the different components: switches, firmware, OS, VMs, database, everything. It's just very difficult to keep up. So we do it all, and small customers just can't do it themselves. So they really need to partner with a company like Oracle that has thousands of engineers who can keep up with this stuff. >> It's true what you say. Even large customers, their CISOs will tell you that lack of talent, lack of skill sets, they just don't have enough people, and so even the big guys can't keep up. Okay, I want you to pitch me as though I'm a developer, which I'm not, but we've got a lot of developers in our community. We'll be at KubeCon next month in Valencia. Sell me on why a developer should lean into ADB on Exadata Cloud@Customer. >> Yeah, it's very straightforward. So Oracle has the most advanced database in the industry, and that's widely recognized by database analysts and experts in the field. Traditionally, it's been hard for a developer to use it, because it's been hard to manage. It's been hard to set up, install, configure, patch, back up, all that kind of stuff. Autonomous Database does it all for you. So as a developer, you can just go into our console, click on creating a database, we ask you four questions: how big, how many CPUs, how much storage, and give us your password, and within minutes you have a database. And at that point you can go crazy and just develop. You don't have to worry about managing the database, patching the database, maintaining the security, backing up the database, all that stuff. You can instantly scale it. You can say, hey, I want to grow it, you just click a button and grow it to pretty much any size you want, and you get all the mission-critical capabilities. So it works for tiny databases, but it is stock exchange quality in terms of performance, availability and security. It's a rock-solid database that's super trivial to use. So what used to be a very complex thing is now completely trivial for a developer. So they get the best of both worlds: they get everything on the database side, and it's trivial for them to use. >> Wow, if you're doing all that stuff for them, what are they going to do on their weekends? Code?
(chuckles) >> They should be developing their application and adding value to their company; that's what they should focus on. And they can be looking at all sorts of new technologies, like JSON in the database, machine learning in the database, graph in the database. So you can build very sophisticated applications, because you don't have to worry about the database anymore. >> All right, let's talk about the competition. It's always a topic I like to bring up with you. From a competitive perspective, how is this latest instantiation of Exadata Cloud@Customer X9M different from running an AWS database service, for instance on Outposts, or, let's say, running SQL Server on Azure Stack, or whatever Microsoft's calling it these days? Give us the competitive angle here. >> Yeah, there kind of is no real competition. So both Amazon and Microsoft have an at-customer solution, but they're very primitive. I mean, just to give you an example, Amazon doesn't run any of their premier database offerings at customer. So whether it's Aurora or Redshift, it doesn't run, just plain does not run. It's not that it runs badly or in a limited way; it just does not run. They can't run Oracle RDS on premises, and same thing with Microsoft. They can't run Azure SQL, which is their premier database, on their at-customer platform. So that kind of tells you how limited that platform is, when even their own premier offerings don't run on it. In contrast, we're running Exadata with our premier Autonomous Database. So it's our premier platform that's in use today by most of the biggest banks, telecoms, retailers, et cetera, in the world, and thousands of smaller customers. So it's super mission critical, super proven, with our premier cloud database, which is Autonomous Database. So it couldn't be more black and white; this is a case where there really is no competition in the cloud at customer space on the database side. >> Okay, but let me follow up on that, Juan, if I may. So it took you guys a while to get to the cloud; it's taken them a while to figure out on-prem. I mean, aren't they going to eventually sort of get there? What gives you confidence that you'll be able to keep ahead? >> Well, there's two things, right? One is we've been doing this for a long time. I mean, Oracle initially started on-prem, and our Exadata platform has been available for over a decade, and we have a ton of experience with this. We run the biggest banks in the world already; it's not some hope for the future, this is what runs today. And our focus has always been a combination of cloud and on-prem. Their heart's not really in the on-prem stuff. Amazon's really a public-cloud-only vendor, and you can see it from the results. They can say whatever they want, but you can see the results: their Outposts platform has been available for several years now, and it still doesn't even run their own products. So you can kind of see how hard they're trying and how much they really care about this market. >> All right, boil it down. If you just had a few things that you'd tell someone about why they should run ADB on Exadata Cloud@Customer, what would you say? >> It's pretty simple: it's the world's most sophisticated database made completely simple, that's it.
So you get a stock exchange-level database, you can start really small and grow, and it's completely trivial to run, because Oracle has automated everything. Within our Autonomous Database we use machine learning and a lot of automation to automate everything around the database. So it's kind of the best of both worlds: the best possible database, it starts as small as you want, and it's the simplest database in the world. >> So I probably should have asked you this while I was pushing the competitive question, but this may be my last question, I promise. It's the age-old debate, and it rages on: you've got specialized databases, kind of a right-tool-for-the-right-job approach, which is clearly where Amazon is headed, versus what Oracle refers to as the converged database. Oracle says its approach is more complete and "simpler." Take us through your thinking on this and the latest positioning, so the audience can understand it a bit better. >> Yeah, so apps aren't what they used to be. Business apps, data-driven apps aren't what they used to be. They used to be kind of green screens where you just entered data. Now everyone wants a very sophisticated app: they want to have location, they want to have maps, they want to have graph in there. They want machine learning built into the app. They want JSON, they want text, they want text search. All these capabilities are what a modern app has to support. And so what Oracle's done is provide a single solution that gives you everything you need to build a modern app, and it's all integrated together. It's all transactional. You have analytics built into the same thing, you have reporting built into the same thing. So it has everything you need to build a modern app. In contrast, what most of our competitors do is give you these little solutions: okay, here you do machine learning over here, you do analytics over there, you do JSON over here, you do spatial over here, you do graph over there. And then it's left to the developer to put an app together from all these pieces. So it's like getting the pieces of a car and having to assemble it yourself, and then maintain it for the rest of your life, which is the even harder part. If one part upgrades, you've got to test that. If another piece upgrades or changes, you've got to test that. You've got to deal with all the security problems of all these different systems. You have to convert the data, you have to move the data back and forth; it's extraordinarily complicated. In our converged database, the data sits in one place and all the algorithms come to the data. It's very simple, it is dramatically simpler. And then Autonomous Database is what makes managing it trivial. You don't really have to manage anything anymore, because Oracle's automated the whole thing. >> So, Juan, we've got a pretty good cadence going here. I mean, I really appreciate you coming on and giving us these little video exclusives. You can tell by, again, that cadence how frequently you guys are making new announcements. So that's great, congrats on yet another announcement. Thanks for coming back on the program, appreciate it. >> Yeah, of course. We invest heavily in data management, that's our core, and we will continue to do that. I mean, we're investing billions of dollars a year, and we intend to stay the leaders in this market. >> Great stuff, and thank you for watching theCUBE, your leader in enterprise tech coverage. This is Dave Vellante, we'll see you next time.
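For readers who want to see what Juan's "four questions" provisioning flow looks like outside the console, here is a minimal sketch using the OCI Python SDK. It is an illustration, not Oracle's reference code: the client and model names follow the oci package as I understand it, the compartment OCID, names and password are placeholders, and a dedicated Exadata Cloud@Customer deployment would take additional parameters (such as the target container database) beyond what is shown.

```python
# Illustrative sketch: create a small Autonomous Database with the OCI Python SDK.
# Compartment OCID, names, and password are placeholders; a dedicated
# Exadata Cloud@Customer deployment needs additional parameters not shown here.
import oci

config = oci.config.from_file()  # reads ~/.oci/config by default
db_client = oci.database.DatabaseClient(config)

details = oci.database.models.CreateAutonomousDatabaseDetails(
    compartment_id="ocid1.compartment.oc1..example",  # placeholder OCID
    db_name="DEVADB",
    display_name="dev-autonomous-db",
    admin_password="ChangeMe#12345",       # the "give your password" question
    cpu_core_count=2,                      # "how many CPUs" - start small
    data_storage_size_in_tbs=1,            # "how much storage"
)

response = db_client.create_autonomous_database(details)
adb = response.data
print("Provisioning started:", adb.id, adb.lifecycle_state)
```

The design point the interview keeps returning to is visible here: sizing is just a couple of integers, and everything else (patching, backup, tuning, security) is handled by the service rather than by the developer.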

Published Date : Mar 16 2022


Juan Loaiza, Oracle | CUBE Conversation, September 2021


 

(bright music)

>> Hello, everyone, and welcome to this CUBE video exclusive. This is Dave Vellante, and as I've said many times, what people sometimes forget is Oracle's chairman is also its CTO, and he understands and appreciates the importance of engineering. It's the lifeblood of tech innovation, and Oracle continues to spend money on R&D. Over the past decade, the company has evolved its Exadata platform by investing in core infrastructure technology. For example, Oracle initially used InfiniBand, which in and of itself was a technical challenge to exploit for higher performance. That was an engineering innovation, and now it's moving to RoCE to try and deliver best-of-breed performance by today's standards. We've seen Oracle invest in machine intelligence for analytics, converge OLTP and mixed workloads, and drive automation functions into its Exadata platform for things like indexing. The point is we've seen a consistent cadence of improvements with each generation of Exadata, and it's no secret that Oracle likes to brag about the results of its investments. At its heart, Oracle develops database software, and databases have to run fast and be rock solid. So Oracle loves to throw around impressive numbers, like 27 million 8K IOPS and more than a terabyte per second for analytics scans. Look, Oracle's objective is to build the best database platform and convince its customers to run on Oracle, instead of doing it themselves or in some other cloud. And because the company owns the full stack, Oracle has a high degree of control over how to optimize the stack for its database. So this is how Oracle intends to compete with Exadata, Exadata Cloud@Customer and other products like ZDLRA, against AWS Outposts, Azure Arc and do-it-yourself solutions. And with me, to talk about Oracle's latest innovation with its Exadata X9M announcement, is Juan Loaiza, who's the Executive Vice President of Mission Critical Database Technologies at Oracle. Juan, thanks for coming on theCUBE, always good to see you, man.

>> Thanks for having me, Dave. It's great to be here.

>> All right, let's get right into it and start with the news. Can you give us a quick overview of the X9M announcement today?

>> Yeah, glad to. So, we've had Exadata on the market for a little over a dozen years, and every year, as you mentioned, we make it better and better. And so this year we're introducing our X9M family of products, and as usual, we're making it better. We're making it better across all the different dimensions: for OLTP, for analytics, lower costs, higher IOPS, higher throughput, more capacity, so it's better all around. And we're introducing a lot of new software features as well that make it easier to use, more manageable, more highly available, more options for customers, more isolation, more workload consolidation, so it's our usual better and better every year. We're already way ahead of the competition in pretty much every metric you can name, but we're not sitting back. We have the pedal to the metal and we're keeping it there.

>> Okay, so as always, you announced some big numbers. You're referencing them, I did in my upfront narrative. You've claimed double to triple digit performance improvements. Tell us, what's the secret sauce that allows you to achieve that magnitude of performance gain?

>> Yeah, there's a lot of secret sauce in Exadata.
First of all, we have custom-designed hardware, so we design the systems from the top down, so it's not a generic system. It's designed to run database with a specific and sole focus of running database, and so we have a lot of technologies in there. Persistent memory is a really big one that we've introduced that enables super low response times for OLTP. RoCE, RDMA over Converged Ethernet with a hundred gigabit network, is a big thing, offload to storage servers is a big thing, and the columnar processing in the storage is a huge thing. So there's a lot of secret sauce, most of it is software and hardware related, and what's interesting about it is that it's very unique. So we've been introducing more and more technologies and actually advancing our lead by introducing very unique, very effective technologies like the ones I mentioned, and we're continuing that with our X9 generation.

>> So that persistent memory allows you to do a write directly, an atomic write directly to memory, and then what, you update asynchronously to the back end at some point? Can you double-click on that a little bit?

>> Yeah, so we use persistent memory as kind of the first tier of storage. And the thing about persistent memory is it's persistent. Unlike normal memory, it doesn't lose its contents when you lose power, so it's just as good as flash or traditional spinning disks in terms of storing data. And the integration that we do is what's called remote direct memory access, which means the hardware sends the new data directly into persistent memory in storage with no software, getting rid of all the software layers in between, and that's what enables us to achieve this extremely low latency. Once it's in persistent memory, it's stored. It's as good as being in flash or disk, so there's nothing else that we need to do. We do age things out of persistent memory to keep only hot data in there. That's one of the tricks that we do, because persistent memory is more expensive than flash or disk, so we tier it. We age data in as it becomes hot and age it out as it becomes cold, but once it's in persistent memory, it's as good as being stored. It is stored.

>> I love it. Flash is a slow tier now. So, (laughs) let's talk about what this--

>> Right, I mean persistent memory is about an order of magnitude faster. Flash is more than an order of magnitude faster than disk drives, so it is a new technology that provides big benefits, particularly for latency on OLTP.

>> Great, thank you for that, okay, we'll get out of the plumbing. Let's talk about what this announcement means to customers. How does all this performance, and you've got a lot of scale here, how does it translate into tangible results, say, for a bank?

>> Yeah, so there's a lot of ways. So, I mentioned performance is a big thing, always with Exadata. We're increasing the performance significantly for OLTP and analytics: OLTP, 50 to 60% performance improvements; analytics, 80% performance improvements; and in terms of cost effectiveness, 30 to 60% improvement. So all of these things are big benefits. You know, one of the differences between a server product like Exadata and a consumer product is that performance translates into cost also. If I get a new smartphone that's faster, it doesn't actually reduce my costs, it just makes my experience a little better.
But with a server product like Exadata, if I have 50% faster, I can translate that into I can serve 50% more users, 50% more workload, 50% more data, or I can buy a 50% smaller system to run the same workload. So, when we talk about performance, it also means lower costs, so if big customers of ours, like banks, telecoms, retailers, et cetera, they can take that performance and turn it into better response times. They can also take that performance and turn it into lower costs, and everybody loves both of those things, so both of those are big benefits for our customers. >> Got it, thank you. Now in a move that was maybe a little bit controversial, you stated flat out that you're not going to bother to compare Exadata cloud and customer performance against AWS Outposts and Azure Stack, rather you chose to compare to RDS, Redshift, Azure SQL. Why, why was that? >> Yeah, so our Exadata runs in the public cloud. We have Exadata that runs in Cloud@Customer, and we have Exadata that runs on Prem. And Azure and Azure Stack, they have something a little more similar to Cloud@Customer. They have where they take their cloud solutions and put them in the customer data center. So when we came out with our new X8, 9M Cloud@Customer, we looked at those technologies and honestly, we couldn't even come up with a good comparison with their equivalent, for example, AWS Outpost, because those products really just don't really run. For example, the two database products that Outposts promote or that Amazon promotes is Aurora for OLTP and Redshift for analytics. Well, those two can't even run at all on their Outposts product. So, it's kind of like beating up on a child or something. (laughs) It doesn't make sense. They're out of our weight class, so we're not even going to compare against them. So we compared what we run, both in public cloud and Cloud@Customer against their best product, which is the Redshifts and the Auroras in their public cloud, which is their most scalable available products. With their equivalent Cloud@Customer, not only does it not perform, it doesn't run at all. Their Premiere products don't run at all on those platforms. >> Okay, but RDS does, right? I think, and Redshift and Azure SQL, right, will run a their version, so you compare it against those. What were the results of the benchmarks when you did made those comparisons? >> Yeah, so compared against their public cloud or Cloud@Customer, we generally get results that are something like 50 times lower latency and close to a hundred times higher analytic throughput, so it's orders of magnitude. We're not talking 50%, we're talking 50 times, so compared to those products, there really is kind of, we're in a different league. It's kind of like they're the middle school little league and we're the professional team, so it's really dramatically different. It's not even in the same league. >> All right, now you also chose to compare the X9M performance against on-premises storage systems. Why and what were those results? >> Yeah, so with the on-premises, traditionally customers bought conventional storage and that kind of stuff, and those products have advanced quite a bit. And again, those aren't optimized. 
Those aren't designed to run database, but some customers have traditionally deployed those, you know, there's less and less these days, but we do get many times faster both on OLTP and analytic performance there, I mean, with analytics that can be up to 80 times faster, so again, dramatically better, but yeah, there's still a lot of on-premise systems, so we didn't want to ignore that fact and compare only to cloud products. >> So these are like to like in the sense that they're running the same level of database. You're not playing games in terms of the versioning, obviously, right? >> Actually, we're giving them a lot of the benefit. So we're taking their published numbers that aren't even running a database, and they use these low-level benchmarking tools to generate these numbers. So, we're comparing our full end-to-end database to storage numbers against their low-level IO tool that they've published in their data sheets, so again, we're trying to give them the benefit of the doubt, but we're still orders of magnitude better. >> Okay, now another claim that caught our attention was you said that 87% of the Fortune 100 organizations run Exadata, and you're claiming many thousands of other organizations globally. Can you paint a picture of the ICP, the Ideal Customer Profile for Exadata? What's a typical customer look like, and why do they use Exadata, Juan? >> Yeah, so the ideal customer is pretty straightforward, customers that care about data. That's pretty much it. (Dave laughs) If you care about data, if you care about performance of data, if you care about availability of data, if you care about manageability, if you care about security, those are the customers that should be looking strongly at Exadata, and those are the customers that are adopting Exadata. That's why you mentioned 87% of the global Fortune 100 have already adopted Exadata. If you look at a lot of industries, for example, pretty much every major bank almost in the entire world is running Exadata, and they're running it for their mission critical workloads, things like financial trading, regulatory compliance, user interfaces, the stuff that really matters. But in addition to the biggest companies, we also have thousands of smaller companies that run it for the same reason, because their data matters to them, and it's frankly the best platform, which is why we get chosen by these very, very sophisticated customers over and over again, and why this product has grown to encompass most of the major corporations in the world and governments also. >> Now, I know Deutsche bank is a customer, and I guess now an engineering partner from the announcement that I saw earlier this summer. They're using Cloud@Customer, and they're collaborating on things like security, blockchain, machine intelligence, and my inference is Deutsch Bank is looking to build new products and services that are powered by your platforms. What can you tell us about that? Can you share any insights? Are they going to be using X9M, for example? >> Yes, Deutsche Bank is a partnership that we announced a few months ago. It's a major partnership. Deutsche Bank is one of the biggest banks in the world. They traditionally are an on-premises customer, and what they've announced is they're going to move almost the entire database estate to our Exadata Cloud@Customer platform, so they want to go with a cloud platform, but they're big enough that they want to run it in their own data center for certain regulatory reasons. 
And so, the announcement that we made with them is they're moving the vast bulk of their data estate to this platform, including their core banking, regulatory applications, so their most critical applications. So, obviously they've done a lot of testing. They've done a lot of trials and they have the confidence to make this major transition to a cloud model with the Exadata Cloud@Customer solution, and we're also working with them to enhance that product and to work in various other fields, like you mentioned, machine learning, blockchain, that kind of project also. So it's a big deal when one of the biggest, most conservative, best respected financial institution in the world says, "We're going all in on this product," that's a big deal. >> Now outside of banking, I know a number of years ago, I stumbled upon an installation or a series of installations that Samsung found out about them as a customer. I believe it's now public, but they've something like 300 Exadatas. So help us understand, is it common that customers are building these kinds of Exadata farms? Is this an outlier? >> Yeah, so we have many large customers that have dozens to hundreds of Exadatas, and it's pretty simple, they start with one or two, and then they see the benefits, themselves, and then it grows. And Samsung is probably the biggest, most successful and most respected electronics company in the world. They are a giant company. They have a lot of different sub units. They do their own manufacturing, so manufacturing's one of their most critical applications, but they have lots of other things they run their Exadata for. So we're very happy to have them as one of our major customers that run Exadata, and by the way, Exadata again, very huge in electronics, in manufacturing. It's not just banking and that kind of stuff. I mean, manufacturing is incredibly critical. If you're a company like Samsung, that's your bread and butter. If your factory stops working, you have huge problems. You can't produce products, and you will want to improve the quality. You want to improve the tracking. You want to improve the customer service, all that requires a huge amount of data. Customers like Samsung are generating terabytes and terabytes of data per day from their manufacturing system. They track every single piece, everything that happens, so again, big deal, they care about data. They care deeply about data. They're a huge Exadata customer. That's kind of the way it works. And they've used it for many years, and their use is growing and growing and growing, and now they're moving to the cloud model as well. >> All right, so we talked about some big customers and Juan, as you know, we've covered Exadata since its inception. We were there at the announcement. We've always stressed the fit in our research with mission critical workloads, which especially resonates with these big customers. My question is how does Exadata resonate with the smaller customer base? >> Yeah, so we talk a lot about the biggest customers, because honestly they have the most critical requirements. But, at some level they have worldwide requirements, so if one of the major financial institutions goes down, it's not just them that's affected, that reverberates through the entire world. There's many other customers that use Exadata. Maybe their application doesn't stop the world, but it stops them, so it's very important to them. 
And so one of the things that we've introduced in our Cloud@Customer and public cloud Exadata platforms is the ability for Oracle to manage all the infrastructure, which enables smaller customers that don't have as much IT sophistication to adopt these very mission critical technology, so that's one of the big advancements. Now, we've always had smaller customers, but now we're getting more and more. We're getting universities, governments, smaller businesses adopting Exadata, because the cloud model for adopting is dramatically simpler. Oracle does all the administration, all the low-level stuff. They don't have to get involved in it at all. They can just use the data. And, on top of that comes our autonomous database, which makes it even easier for smaller customers to adapt. So Exadata, which some people think of as a very high-end platform in this cloud model, and particularly with autonomous databases is very accessible and very useful for any size customer really. >> Yeah, by all accounts, I wouldn't debate Exadata has been a tremendous success. But you know, a lot of customers, they still prefer to roll their own, do it themselves, and when I talk to them and ask them, "Okay, why is that?" They feel it limits their reliance on a single vendor, and it gives them better ability to build what I call a horizontal infrastructure that can support say non-Oracle workloads, so what do you tell those customers? Why should those customers run Oracle database on Exadata instead of a DIY infrastructure? >> Yeah, so that debate has gone on for a lot of years. And actually, what I see, there's less and less of that debate these days. You know, initially customers, many customers, they were used to building their own. That's kind of what they did. They were pretty good at it. What we have shown customers, and when we talk about these major banks, those are the kinds of people that are really good at it. They have giant IT departments. If you look at a major bank in the world, they have tens of thousands of people in their IT departments. These are gigantic multi-billion dollar organizations, so they were pretty good at this kind of stuff. And what we've shown them is you can't build this yourself. There's so much software that we've written to integrate with the database that you just can't build yourself, it's not possible. It's kind of like trying to build your own smartphone. You really can't do it, the scale, the complexity of the problem. And now as the cloud model comes in, customers are realizing, hey, all this attention to building my own infrastructure, it's kind of last decade, last century. We need to move on to more of an as a service model, so we can focus on our business. Let enterprises that are specialized in infrastructure, like Oracle that are really, really good at it, take care of the low-level details, and let me focus on things that differentiate me as a business. It's not going to differentiate them to establish their own storage for database. That's not a differentiator, and they can't do it nearly as well as we can, and a lot of that is because we write a lot of special technology and software that they just can't do themselves, it's not possible. It's just like you can't build your own smartphone. It's just really not possible. >> Now, another area that we've covered extensively, we were there at the unveiling, as well is ZDLRA, Zero Data Loss Recovery Appliance. 
We've always liked this product, especially for mission critical workloads, we're near zero data loss, where you can justify that. But while we always saw it as somewhat of a niche market, first of all, is that fair, and what's new with ZDLRA? >> Yeah ZDLRA has been in the market for a number of years. We have some of the biggest corporations in the world running on that, and one of the big benefits has been zero data loss, so again, if you care about data, you can't lose data. You can't restore to last night's backup if something happens. So if you're a bank, you can't restore everybody's data to last night. Suppose you made a deposit during the day. They're like, "Hey, sorry, Mr. Customer, your deposit, "well, we don't have any record of it anymore, "'cause we had to restore to last night's backup," you know, that doesn't work. It doesn't work for airlines. It doesn't work for manufacturing. That whole model is obsolete, so you need zero data loss, and that's why we introduced Zero Data Loss Recovery Appliance, and it's been very successful in the market. In addition to zero data loss, it actually provides much faster restore, much more reliable restores. It's more scalable, so it has a lot of advantages. With our X9M generation, we're introducing several new capabilities. First of all, it has higher capacity, so we can store more backups, keep data for longer. Another thing is we're actually dropping the price of the entry-level configuration of ZDLRA, so it makes it more affordable and more usable by smaller businesses, so that's a big deal. And then the other thing that we're hearing a lot about, and if you read the news at all, you hear a lot about ransomware. This is a major problem for the world, cyber criminals breaking into your network and taking the data ransom. And so we've introduced some, we call cyber vault capabilities in ZDLRA. They help address this ransomware issue that's kind of rampant throughout the world, so everybody's worried about that. There's now regulatory compliance for ransomware that particularly financial institutions have to conform to, and so we're introducing new capabilities in that area as well, which is a big deal. In addition, we now have the ability to have multiple ZDLRAs in a large enterprise, and if something happens to one, we automatically fail over backups to another. We can replicate across them, so it makes it, again, much more resilient with replication across different recovery appliances, so a lot of new improvements there as well. >> Now, is an air gap part of that solution for ransomware? >> No, air gap, you really can't have your back, if you're continuously streaming changes to it, you really can't have an air gap there, but you can protect the data. There's a number of technologies to protect the data. For example, one of the things that a cyber criminal wants to do is they want to take control of your data and then get rid of your backup, so you can't restore them. So as a simple example of one thing we're doing is we're saying, "Hey, once we have the data, "you can't delete it for a certain amount of days." So you might say, "For the 30 days, "I don't care who you are. "I don't care what privileges you have. "I don't care anything, I'm holding onto that data "for at least 30 days," so for example, a cyber criminal can't come in and say, "Hey, I'm going to get into the system "and delete that stuff or encrypt it," or something like that. So that's a simple example of one of the things that the cyber vault does. 
>> So, even as an administrator, I can't change that policy? >> That's right, that's one of the goals is doesn't matter what privileges you have, you can't change that policy. >> Does that eliminate the need for an air gap or would you not necessarily recommend, would you just have another layer of protection? What's your recommendation on that to customers? >> We always recommend multiple layers of protection, so for example, in our ZDLRA, we support, we offload tape backups directly from the appliance, so a great way to protect the data from any kind of thing is you put it on a tape, and guess what, once that tape drive is filed away, I don't care what cyber criminal you are, if you're remote, you can't access that data. So, we always promote multiple layers, multiple technologies to protect the data, and tape is a great way to do that. We can also now archive. In addition to tape, we can now archive to the public cloud, to our object storage servers. We can archive to what we call our ZFS appliance, which is a very low cost storage appliance, so there's a number of secondary archive copies that we offload and implement for customers. We make it very easy to do that. So, yeah, you want multiple layers of protection. >> Got it, okay, your tape is your ultimate air gap. ZDLRA is your low RPO device. You've got cloud kind of in the middle, maybe that's your cheap and deep solution, so you have some options. >> Juan: Yes. >> Okay, last question. Summarize the announcement, if you had to mention two or three takeaways from the X9M announcement for our audience today, what would you choose to share? >> I mean, it's pretty straightforward. It's the new generation. It's significantly faster for OLTP, for analytics, significantly better consolidation, more cost-effective. That's the big picture. Also there's a lot of software enhancements to make it better, improve the management, make it more usable, make it better disaster recovery. I talked about some of these cyber vault capabilities, so it's improved across all the dimensions and not in small ways, in big ways. We're talking 50% improvement, 80% improvements. That's a big change, and also we're keeping the price the same, so when you get a 50 or 80% improvement, we're not increasing the price to match that, so you're getting much better value as well. And that's pretty much what it is. It's the same product, even better. >> Well, I love this cadence that we're on. We love having you on these video exclusives. We have a lot of Oracle customers in our community, so we appreciate you giving us the inside scope on these announcements. Always a pleasure having you on theCUBE. >> Thanks for having me. It's always fun to be with you, Dave. >> All right, and thank you for watching. This is Dave Vellante for theCUBE, and we'll see you next time. (bright music)
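The retention-lock idea discussed above, where a backup cannot be deleted inside a fixed window no matter what privileges the caller holds, can be pictured with a small conceptual sketch. This is not ZDLRA's actual interface or implementation, just a generic Python illustration of an immutability window that ignores administrator status; the class, field names and 30-day default are assumptions made for the example.

# Conceptual sketch of a retention lock: deletes are refused inside the
# immutability window regardless of the caller's privileges.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class BackupRecord:
    backup_id: str
    created_at: datetime
    retention_days: int = 30  # policy fixed when the backup is ingested

    def delete(self, requested_by: str, is_admin: bool, now: Optional[datetime] = None) -> bool:
        # Privilege level is deliberately ignored: the window is absolute.
        now = now or datetime.now(timezone.utc)
        locked_until = self.created_at + timedelta(days=self.retention_days)
        if now < locked_until:
            print(f"DENIED: {self.backup_id} is locked until {locked_until:%Y-%m-%d} "
                  f"(requested by {requested_by}, admin={is_admin})")
            return False
        print(f"Deleted {self.backup_id}")
        return True

backup = BackupRecord("bk-0001", datetime.now(timezone.utc))
backup.delete(requested_by="dba1", is_admin=True)  # refused: still inside the 30-day window

A real appliance enforces this below the administrative interface, so stolen credentials cannot simply edit the policy away, which is the point being made in the exchange above.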

Published Date : Sep 28 2021


Breaking Analysis: Tech Spending Roars Back in 2021


 

>> Narrator: From theCUBE Studios in Palo Alto, in Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> Tech spending is poised to rebound as the economy reopens in 2021. CIOs and IT buyers, they expect a 4% increase in 2021 spending based on ETR's latest surveys. And we believe that number will actually be higher, in the six to 7% range even. The big drivers are continued fine tuning of, and investment in digital strategies, for example, cloud security, AI data and automation. Application modernization initiatives continue to attract attention, and we also expect more support with work from home demand, for instance laptops, et cetera. And we're even seeing pent-up demand for data center infrastructure and other major risks to this scenario, they remain the pace of the reopening, of course, no surprise there, however, even if there are speed bumps to the vaccine rollout and achieving herd immunity, we believe tech spending will grow at least two points faster than GDP, which is currently forecast at 4.1%. Hello and welcome to this week's (indistinct) on Cube Insights powered by ETR. In this breaking analysis, we want to update you on our latest macro view of the market, and then highlight a few key sectors that we've been watching, namely cloud with a particular drill down on Microsoft and AWS, security, database, and then we'll look at Dell and VMware as a proxy for the data center. Now here's a look at what IT buyers and CIOs think. This chart shows the latest survey data from ETR and it compares the December results with the year earlier survey. Consistent with our earlier reporting, we see a kind of a swoosh-like recovery with a slower first half and accelerating in the second half. And we think that CIOs are being prudently conservative, 'cause if GDP grows at 4% plus, we fully expect tech spending to outperform. Now let's look at the factors that really drive some of our thinking on that. This is data that we've shown before it asks buyers if they're initiating any of the following strategies in the coming quarter, in the face of the pandemic and you can see there's no change in work from home, really no change in business travel, but hiring freezes, freezing new deployments, these continue to trend down. New deployments continue to be up, layoffs are trending down and hiring is also up. So these are all good signs. Now having said that, one part of our scenario assumes workers return and the current 75% of employees that work from home will moderate by the second half to around 35%. Now that's double the historical average, and that large percentage, that will necessitate continued work from home infrastructure spend, we think and drive HQ spending as well in the data center. Now the caveat of course is that lots of companies are downsizing corporate headquarters, so that could weigh on this dual investment premise that we have, but generally with the easy compare in these tailwinds, we expect solid growth in this coming year. Now, what sectors are showing growth? Well, the same big four that we've been talking about for 10 months, machine intelligence or AI/ML, RPA and broader automation agendas, these lead the pack along with containers and cloud. These four, you can see here above that red dotted line at 40%, that's a 40% net score which is a measure of spending momentum. 
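As a rough guide for readers, the Net Score metric referenced throughout these Breaking Analysis segments is essentially the share of survey respondents increasing spend on a platform minus the share decreasing it. The sketch below is a simplified approximation of that idea, not ETR's exact methodology; the response categories and counts are made up for illustration.

# Simplified net-score style spending-momentum metric:
# percent of respondents spending more minus percent spending less.
def net_score(responses):
    increasing = responses.get("adding", 0) + responses.get("increasing", 0)
    decreasing = responses.get("decreasing", 0) + responses.get("replacing", 0)
    total = sum(responses.values())
    return 100.0 * (increasing - decreasing) / total if total else 0.0

# Hypothetical survey counts for a single platform (illustration only).
sample = {"adding": 120, "increasing": 310, "flat": 380, "decreasing": 60, "replacing": 30}
print(f"Net score: {net_score(sample):.1f}%")  # about 37.8% with these made-up counts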
Now cloud, it's the most impressive because what you see in this chart is spending momentum or net score in the vertical axis and market share or pervasiveness in the data center on the horizontal axis. Now cloud it stands out, as it's has a large market share and it's got spending velocity tied to it. So, I mean that is really impressive for that sector. Now, what we want to do here is do a quick update on the big three cloud revenue for 2020. And so we're looking back at 2020, and this really updates the chart that we showed last week at our CUBE on Cloud event, the only differences Azure, Microsoft reported and this chart shows IaaS estimates for the big three, we had had Microsoft Azure in Q4 at 6.8 billion, it came in at 6.9 billion based on our cloud model. Now the points we previously made on this chart, they stand out. AWS is the biggest, and it's growing more slowly but it throws off more absolute dollars, Azure grew 48% sent last quarter, we had it slightly lower and so we've adjusted that and that's incredible. And Azure continues to close that gap on AWS and we'll see how AWS and Google do when they report next week. We definitely think based on Microsoft result that AWS has upside to these numbers, especially given the Q4 push, year end, and the continued transition to cloud and even Google we think can benefit. Now what we want to do is take a closer look at Microsoft and AWS and drill down into those two cloud leaders. So take a look at this graphic, it shows ETR's survey data for net score across Microsoft's portfolio, and we've selected a couple of key areas. Virtually every sector is in the green and has forward momentum relative to the October survey. Power Automate, which is RPA, Teams is off the chart, Azure itself we've reported on that, is the linchpin of Microsoft's innovation strategy, serverless, AI analytics, containers, they all have over 60% net scores. Skype is the only dog and Microsoft is doing a fabulous job of transitioning its customers to Teams away from Skype. I think there are still people using Skype. Yes, I know it's crazy. Now let's take a look at the AWS portfolio drill down, there's a similar story here for Amazon and virtually all sectors are well into the 50% net scores or above. Yeah, it's lower than Microsoft, but still AWS, very, very large, so across the board strength for the company and it's impressive for a $45 billion cloud company. Only Chime is lagging behind AWS and maybe, maybe AWS needs a Teams-like version to migrate folks off of Chime. Although you do see it's an uptick there relative to the last survey, but still not burning the house down. Now let's take a look at security. It's a sector that we've highlighted for several quarters, and it's really undergoing massive change. This of course was accelerated by the work from home trend, and this chart ranks the CIO and CSO priorities for security, and here you see identity access management stands out. So this bodes well for the likes of Okta and SailPoint, of course endpoint security also ranks highly, and that's good news for a company like CrowdStrike or Forescout, Carbon Black, which was acquired by VMware. And you can see network security is right there as well, I mean, it's all kind of network security but Cisco, Palo Alto, Fortinet are some of the names that we follow closely there, and cloud security, Microsoft, Amazon and Zscaler also stands out. Now, what we want to do now is drill in a little bit and take a look at the vendor map for security. 
So this chart shows one of our favorite views, it's getting net score or spending momentum on the vertical axis and market share on the horizontal. Okta, note in the upper right of that little chart there that table, Okta remains the highest net score of all the players that we're showing here, SailPoint and CrowdStrike definitely looming large, Microsoft continues to be impressive because of its both presence, you can see that dot in the upper right there and it's momentum, and you know, for context, we've included some of the legacy names like RSA and McAfee and Symantec, you could see them in the red as is IBM, and then the rest of the pack, they're solidly in the green, we've said this before security remains a priority, it's a very strong market, CIOs and CSOs have to spend on it, they're accelerating that spending, and it's a fragmented space with lots of legitimate players, and it's undergoing a major change, and with the SolarWinds hack, it's on everyone's radar even more than we've seen with earlier high profile breaches, we have some other data that we'll share in the future, on that front, but in the interest of time, we'll press on here. Now, one of the other sectors that's undergoing significant changes, database. And so if you take a look at the latest survey data, so we're showing that same xy-view, the first thing that we call your attention to is Snowflake, and we've been reporting on this company for years now, and sharing ETR data for well over a year. The company continues to impress us with spending momentum, this last survey it increased from 75% last quarter to 83% in the latest survey. This is unbelievable because having now done this for quite some time, many, many quarters, these numbers are historically not sustainable and very rarely do you see that kind of increase from the mid-70s up into the '80s. So now AWS is the other big call out here. This is a company that has become a database powerhouse, and they've done that from a standing start and they've become a leader in the market. Google's momentum is also impressive, especially with it's technical chops, it gets very, very high marks for things like BigQuery, and so you can see it's got momentum, it does not have the presence in the market to the right, that for instance AWS and Microsoft have, and that brings me to Microsoft is also notable, because it's so large and look at the momentum, it's got very, very strong spending momentum as well, so look, this database market it's seeing dramatically different strategies. Take Amazon for example, it's all about the right tool for the right job, they get a lot of different data stores with specialized databases, for different use cases, Aurora for transaction processing, Redshift for analytics, I want a key value store, hey, some DynamoDB, graph database? You got little Neptune, document database? They've got that, they got time series database, so very, very granular portfolio. You got Oracle on the other end of the spectrum. It along with several others are converging capabilities and that's a big trend that we're seeing across the board, into, sometimes we call it a mono database instead of one database fits all. Now Microsoft's world kind of largely revolves around SQL and Azure SQL but it does offer other options. 
But the big difference between Microsoft and AWS is AWS' approach is really to maximize the granularity in the technical flexibility with fine-grained access to primitives and APIs, that's their philosophy, whereas Microsoft with synapse for example, they're willing to build that abstraction layer as a means of simplifying the experiences. AWS, they've been reluctant to do this, their approach favors optionality and their philosophy is as the market changes, that will give them the ability to move faster. Microsoft's philosophy favors really abstracting that complexity, now that adds overhead, but it does simplify, so these are two very interesting counter poised strategies that we're watching and we think there's room for both, they're just not necessarily one better than the other, it's just different philosophies and different approaches. Now Snowflake for its part is building a data cloud on top of AWS, Google and Azure, so it's another example of adding value by abstracting away the underlying infrastructure complexity and it obviously seems to be working well, albeit at a much smaller scale at this point. Now let's talk a little bit about some of the on-prem players, the legacy players, and we'll use Dell and VMware as proxies for these markets. So what we're showing here in this chart is Dell's net scores across select parts of its portfolio and it's a pretty nice picture for Dell, I mean everything, but Desktop is showing forward momentum relative to previous surveys, laptops continue to benefit from the remote worker trend, in fact, PCs actually grew this year if you saw our spot on Intel last week, PCs had peaked, PC volume at peaked in 2011 and it actually bumped up this year but it's not really, we don't think sustainable, but nonetheless it's been a godsend during the pandemic as data center infrastructure has been softer. Dell's cloud is up and that really comprises a bunch of infrastructure along with some services, so that's showing some strength that both, look at storage and server momentum, they seem to be picking up and this is really important because these two sectors have been lagging for Dell. But this data supports our pent-up demand premise for on-prem infrastructure, and we'll see if the ETR survey which is forward-looking translates into revenue growth for Dell and others like HPE. Now, what about Dell's favorite new toy over at VMware? Let's take a look at that picture for VMware, it's pretty solid. VMware cloud on AWS, we've been reporting on that for several quarters now, it's showing up in the ETR survey and it is well, it's somewhat moderating, it's coming down from very high spending momentum, so it's still, we think very positive. NSX momentum is coming back in the survey, I'm not sure what happened there, but it's been strong, VMware's on-prem cloud with VCF VMware Cloud Foundation, that's strong, Tanzu was a bit surprising because containers are very hot overall, so that's something we're watching, seems to be moderating, maybe the market says okay, you did great VMware, you're embracing containers, but Tanzu is maybe not the, we'll see, we'll see how that all plays out. I think it's the right strategy for VMware to embrace that container strategy, but we said remember, everybody said containers are going to kill VMware, well, VMware rightly, they've embraced cloud with VMware cloud on AWS, they're embracing containers. So we're seeing much more forward-thinking strategies and management philosophies. 
Carbon Black, that benefits from the security tailwind, and then the core infrastructure looks good, vSAN, vSphere and VDI. So the big thing that we're watching for VMware, is of course, who's going to be the next CEO. Is it going to be Zane Rowe, who's now the acting CEO? And of course he's been the CFO for years. Who's going to get that job? Will it be Sanjay Poonen? The choice I think is going to say much about the direction of VMware going forward in our view. Succeeding Pat Gelsinger is like, it's going to be like following Peyton Manning at QB, but this summer we expect Dell to spin out VMware or do some other kind of restructuring, and restructure both VMware and Dell's balance sheet, it wants to get both companies back to investment grade and it wants to set a new era in motion or it's going to set a new era in motion. Now that financial transaction, maybe it does call for a CFO in favor of such a move and can orchestrate such a move, but certainly Sanjay Poonen has been a loyal soldier and he's performed very well in his executive roles, not just at VMware, but previous roles, SAP and others. So my opinion there's no doubt he's ready and he's earned it, and with, of course with was no offense to Zane Rowe by the way, he's an outstanding executive too, but the big questions for Dell and VMware's what will the future of these two companies look like? They've dominated, VMware especially has dominated the data center for a decade plus, they're responding to cloud, and some of these new trends, they've made tons of acquisitions and Gelsinger has orchestrated TAM expansion. They still got to get through paying down the debt so they can really double down on an innovation agenda from an R&D perspective, that's been somewhat hamstrung and to their credit, they've done a great job of navigating through Dell's tendency to take VMware cash and restructure its business to go public, and now to restructure both companies to do the pivotal acquisition, et cetera, et cetera, et cetera and clean up it's corporate structure. So it's been a drag on VMware's ability to use its free cash flow for R&D, and again it's been very impressive what it's been able to accomplish there. On the Dell side of the house, it's R&D largely has gone to kind of new products, follow-on products and evolutionary kind of approach, and it would be nice to see Dell be able to really double down on the innovation agenda especially with the looming edge opportunity. Look R&D is the lifeblood of a tech company, and there's so many opportunities across the clouds and at The Edge we've talked this a lot, I haven't talked much about or any about IBM, we wrote a piece last year on IBM's innovation agenda, really hinges on its R&D. It seems to be continuing to favor dividends and stock buybacks, that makes it difficult for the company to really invest in its future and grow, its promised growth, Ginni Rometty promised growth, that never really happened, Arvind Krishna is now promising growth, hopefully it doesn't fall into the same pattern of missed promises, and my concern there is that R&D, you can't just flick a switch and pour money and get a fast return, it takes years to get that. (Dave chuckles) We talked about Intel last week, so similar things going on, but I digress. Look, these guys are going to require in my view, VMware, Dell, I'll put HPE in there, they're going to require organic investment to get back to growth, so we're watching these factors very, very closely. 
Okay, got to wrap up here, so we're seeing IT spending growth coming in as high as potentially 7% this year, and it's going to be powered by the same old culprits, cloud, AI, automation, we'll be doing an RPA update soon here, application modernization, and the new work paradigm that we think will force increased investments in digital initiatives. The doubling of the expectation of work from home is significant, and so we see this hybrid world, not just hybrid cloud but hybrid work from home and on-prem, this new digital world, and it's going to require investment in both cloud and on-prem, and we think that's going to lift both boats but cloud, clearly the big winner. And we're not by any means suggesting that their growth rates are going to somehow converge, they're not, cloud will continue to outpace on-prem by several hundred basis points, throughout the decade we think. And AWS and Microsoft are in the top division of that cloud bracket. Security markets are really shifting and we continue to like the momentum of companies in identity and endpoint and cloud security, especially the pure plays like CrowdStrike and Okta and SailPoint, and Zscaler and others that we've mentioned over the past several quarters, but CSOs tell us they want to work with the big guys too, because they trust them, especially Palo Alto networks, Cisco obviously in the mix, their security business continues to outperform the balance of Cisco's portfolio, and these companies, they have resources to withstand market shifts and we'll do a deeper drill down at the security soon and update you on other trends, on other companies in that space. Now the database world, it continues to heat up, I used to say on theCUBE all the time that decade and a half ago database was boring and now database is anything but, and thank you to cloud databases and especially Snowflake, it's data cloud vision, it's simplicity, we're seeing lots of different ways though, to skin the cat, and while there's disruption, we believe Oracle's position is solid because it owns Mission-Critical, that's its stronghold, and we really haven't seen those workloads migrate into the cloud, and frankly, I think it's going to be hard to rest those away from Oracle. Now, AWS and Microsoft, they continue to be the easy choice for a lot of their customers. Microsoft migrating its software state, AWS continues to innovate, we've got a lot of database choices, the right tool for the right job, so there's lots of innovation going on in databases beyond these names as well, and we'll continue to update you on these markets shortly. Now, lastly, it's quite notable how well some of the legacy names have navigated through COVID. Sure, they're not rocketing like many of the work-from-home stocks, but they've been able to thus far survive, and in the example of Dell and VMware, the portfolio diversity has been a blessing. The bottom line is the first half of 2021 seems to be shaping up as we expected, momentum for the strongest digital plays, low interest rates helping large established companies hang in there with strong balance sheets, and large customer bases. And what will be really interesting to see is what happens coming out of the pandemic. Will the rich get richer? Yeah, well we think so. But we see the legacy players adjusting their business models, embracing change in the market and steadily moving forward. 
And we see at least a dozen new players hitting the radar that could become leaders in the coming decade, and as always, we'll be highlighting many of those in our future episodes. Okay, that's it for now, listen, these episodes remember, they're all available as podcasts, all you got to do is search for Breaking Analysis Podcasts and you'll you'll get them so please listen, like them, if you like them, share them, really, I always appreciate that, I publish weekly on wikibon.com and siliconangle.com, and really would appreciate your comments and always do in my LinkedIn posts, or you can always DM me @dvellante or email me at david.vellante@siliconangle.com, and tell me what you think is happening out there. Don't forget to check out ETR+ for all the survey action, this is David Vellante, thanks for watching theCUBE Insights powered by ETR. Stay safe, we'll see you next time. (downbeat music)

Published Date : Jan 29 2021


Breaking Analysis: Best of theCUBE on Cloud


 

>> Narrator: From theCUBE Studios in Palo Alto, in Boston bringing you data-driven insights from theCUBE and ETR. This is "Breaking Analysis" with Dave Vellante.

>> The next 10 years of cloud are going to differ dramatically from the past decade. The early days of cloud deployed virtualization on standard off-the-shelf components, x86 microprocessors, disk drives, et cetera, to then scale out and build a large distributed system. The coming decade is going to see a much more data-centric, real-time, intelligent, call it even hyper-decentralized cloud that will comprise on-prem, hybrid, cross-cloud and edge workloads, with a services layer that will abstract the underlying complexity of the infrastructure, which will also comprise much more custom and varied components. This was a key takeaway of the guests from theCUBE on Cloud, an event hosted by SiliconANGLE on theCUBE. Welcome to this week's Wikibon CUBE Insights Powered by ETR. In this episode, we'll summarize the findings of our recent event and extract the signal from our great guests with a series of comments and clips from the show. CUBE on Cloud is our very first virtual editorial event. It was designed to bring together our community in an open forum. We ran the day on our 365 software platform and had a great lineup of CEOs, CIOs, data practitioners and technologists. We had cloud experts, analysts and many opinion leaders, all brought together in a day-long series of sessions that we developed in order to unpack the future of cloud computing in the coming decade. Let me briefly frame up the conversation and then turn it over to some of our guests. First, we put forth our view of how modern cloud has evolved and where it's headed. This graphic that we're showing here talks about the progression of cloud innovation over time. Cloud, like many innovations, started as a novelty. When AWS announced S3 in March of 2006, nobody in the vendor or user communities, or even in the trade press, really paid much attention to it. Then later that year, Amazon announced EC2 and people started to think about a new model of computing. But it was largely tire kickers and bleeding-edge developers that took notice and really leaned in. Now the financial crisis of 2007 to 2009 really created what we call a cloud awakening, and it put cloud on the radar of many CFOs. Shadow IT emerged within departments that wanted to take IT in bite-sized chunks and, along with the CFO, wanted to take it as OPEX versus CAPEX. And then IT transformation really took hold. We came out of the financial crisis and we've been on an 11-year cloud boom, and it doesn't look like it's going to stop anytime soon. Cloud has really disrupted the on-prem model, as we've reported, and completely transformed IT. Ironically, the pandemic hit at the beginning of this decade and created a mandate to go digital, and so it accelerated the industry transformation that we're highlighting here, which probably would have taken several more years to mature, but overnight the forced march to digital happened. And it looks like it's here to stay. Now the next wave, we think, will be much more about business or industry transformation. We're seeing the first glimpses of that. Holger Mueller of Constellation Research summed it up at our event very well, I thought. He basically said the cloud is the big winner of COVID. Of course we know that now. Normally we talk about seven-year economic cycles; he said he was talking about planning and investment cycles.
Now we operate in seven-day cycles. The examples he gave: where do we open or close the store? How do we pivot to support remote workers without the burden of CAPEX? And we think that the things listed on this chart are going to be front and center in the coming years, data, AI, a fully digitized and intelligent stack that will support next gen disruptions in autos, manufacturing, finance, farming and virtually every industry, where the system will expand to the edge. And the underlying infrastructure across physical locations will be hidden. Many issues remain, not the least of which is latency, which we talked about at the event in quite some detail. So let's talk about how the Big 3 cloud players are going to participate in this next era. Well, in short, the consensus from the event was that the rich get richer. Let's take a look at some data. This chart shows our most recent estimates of IaaS and PaaS spending for the Big 3. And we're going to update this after earnings season, but a couple of points stand out. First, we want to make the point that combined, the Big 3 now account for almost $80 billion of infrastructure spend last year. That $80 billion was not all incremental (laughs) No, it's caused consolidation and disruption in the on-prem data center business, and within IT shops companies like Dell, HPE, IBM, Oracle and many others have felt the heat and have had to respond with hybrid and cross-cloud strategies. Second, while it's true that Azure and GCP appear to be growing faster than AWS, we don't really know the exact numbers, of course, because only AWS provides a clean view of IaaS and PaaS. Microsoft and Google, they kind of hide the ball on their numbers, which by the way, I don't blame them, but they do leave breadcrumbs and clues on growth rates. And we have other means of estimating through surveys and the like, but it's undeniable Azure is closing the revenue gap on AWS. The third point is that, while Azure and Google are growing faster, AWS is the only company by our estimates to grow its business sequentially last quarter. And in and of itself, that's not really the important part. What is significant is that because AWS is so large now at 45 billion, even at their slower growth rates it grows much more in absolute terms than its competitors. So we think AWS is going to keep its lead for some time. We think Microsoft and AWS will continue to lead the pack. You know, they might converge, maybe it will be a two-horse race in terms of who's first, who's second in terms of cloud revenue and how it's counted, depending on what they count in their numbers. And Google, look, with its balance sheet and global network, it's going to play the long game, and virtually everyone else, with the exception of perhaps Alibaba, is going to be secondary players on these platforms. Now this next graphic underscores that reality and kind of lays out the competitive landscape. What we're showing here is survey data from ETR of more than 1400 CIOs and IT buyers, and on the vertical axis is Net Score, which measures spending momentum, on the horizontal axis is so-called Market Share, which is a measure of pervasiveness in the data set. The key points are, AWS and Microsoft, look at it. They stand alone, so far ahead of the pack. I mean, they really literally would have to fall down to lose their lead. High spending velocity and large share of the market are the hallmarks of these two companies. And we don't think that's going to change anytime soon. 
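As a quick aside on those two axes: Net Score is commonly described as the percentage of a vendor's survey citations reporting higher spend minus the percentage reporting lower spend, and Market Share here means pervasiveness, a vendor's share of all citations in the data set. Below is a minimal Python sketch of that arithmetic against a made-up handful of responses; the rows and resulting numbers are purely illustrative, not ETR data, and ETR's production methodology may differ in its details.

# Minimal sketch of the two survey axes described above; the responses
# below are invented placeholders, not actual ETR survey data.
responses = [
    # (vendor, stated spending intention for the coming year)
    ("AWS", "increase"), ("AWS", "flat"), ("AWS", "increase"),
    ("Microsoft", "increase"), ("Microsoft", "decrease"),
    ("Google", "flat"), ("Google", "increase"),
]

def net_score(rows, vendor):
    # Spending momentum: % of the vendor's citations with rising spend
    # minus % with falling spend (the common description of Net Score).
    cited = [intent for v, intent in rows if v == vendor]
    up = sum(i == "increase" for i in cited) / len(cited)
    down = sum(i == "decrease" for i in cited) / len(cited)
    return round(100 * (up - down))

def market_share(rows, vendor):
    # Pervasiveness: the vendor's share of all citations in the data set.
    return round(100 * sum(v == vendor for v, _ in rows) / len(rows))

for vendor in ("AWS", "Microsoft", "Google"):
    print(vendor, net_score(responses, vendor), market_share(responses, vendor))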
Now, Google, even though it's far behind, they have the financial strength to continue to position themselves as an alternative to AWS and, of course, as an analytics specialist. So it will continue to grow, but it will be challenged, we think, to catch up to the leaders. Now take a look at the hybrid zone where the field is playing. These are companies that have a large on-prem presence and have been forced to initiate a coherent cloud strategy, and of course, including multicloud. And we include Google in this pack because they're behind and they have to take a differentiated approach relative to AWS, and maybe cozy up to some of these traditional enterprise vendors to help Google get to the enterprise. And you can see from the on-prem crowd, VMware Cloud on AWS stands out as having some momentum, as does Red Hat OpenShift, which is, it's cloud, but it's really sort of an ingredient, it's not really broad IaaS specifically, but it's a component of cloud, as is VMware Cloud, which includes VCF or VMware Cloud Foundation, and even Dell's cloud. We would expect HPE, with its GreenLake strategy and its financials shoring up, should be picking up momentum in the future in terms of what the customers of this survey consider cloud. And then of course you could see IBM and Oracle, they're in the game, but they don't have the spending momentum and they don't have the CAPEX chops to compete with the hyperscalers. IBM's cloud revenue actually dropped 7% last quarter, so that highlights the challenges that company is facing. Oracle's cloud business is growing in the single digits. It's kind of up and down, but again it underscores that these two companies are really about migrating their software install bases to their captive clouds. As well, IBM, for example, has launched a financial cloud as a way to differentiate and not take AWS head-on in infrastructure as a service. The bottom line is that other than the Big 3 and Alibaba, the rest of the pack will be plugging into, hybridizing and cross-clouding those platforms. And there are definitely opportunities there, specifically related to creating that abstraction layer that we talked about earlier and hiding that underlying complexity, and importantly creating incremental value. Good examples: what Snowflake is doing with its data cloud, what the data protection guys are doing. A company like Clumio is headed in that direction, as are others. So, you keep an eye on that and think about where the white space is and where the value can be across clouds. That's where the opportunity is. So let's see, what is this all going to look like? How does theCUBE community think it's going to unfold? Let's hear from theCUBE guests and theCUBE on Cloud speakers and some of those highlights. Now, unfortunately we don't have time to show you clips from every speaker. We have like 10-plus hours of video content, but we've tried to pull together some comments that summarize the sentiment from the community. So I'm going to have John Furrier briefly explain what theCUBE on Cloud is all about and then let the guests speak for themselves. After John, Pradeep Sindhu is going to give a nice technical overview of how the cloud was built out and what's changing in the future. I'll give you a hint: it has to do with data. And then speaking of data, Mai-Lan Bukovec, who heads up AWS's storage portfolio. She'll explain how she views the coming changes in cloud and how they look at storage. Again, no surprise, it's all about data. 
Now, one of the themes that you'll hear from guests is the notion of a distributed cloud model. And Zhamak Dehghani, who is a data architect, she'll explain her view of the future of data architectures. We also have thoughts from analysts like Zeus Kerravala and Maribel Lopez, and some comments from both Microsoft and Google to complement AWS's view of the world. In fact, we asked JG Chirapurath from Microsoft to comment on the common narrative that Microsoft products are not best-of-breed. They put out a 1.0 and then they get better, or sometimes people say, well, they're just good enough. So we'll see what his response is to that. And Paul Gillin asks Amit Zavery of Google his thoughts on the cloud leaderboard and how Google thinks about their third-place position. Dheeraj Pandey gives his perspective on how technology has progressed and been miniaturized over time, and what's coming in the future. And then Simon Crosby gives us a framework to think about the edge as the most logical opportunity to process data, not necessarily a physical place. And this was echoed by John Roese and Chris Wolf, two experienced CTOs who went into some great depth on this topic. Unfortunately, I don't have the clips of those two, but their comments can be found on the CTO power panel, The Technical Edge it's called, that's the segment at theCUBE on Cloud event site, and we'll share the URL later. Now, the highlight reel ends with CEO Joni Klippert, who talks about the changes in securing the cloud from a developer angle. And finally, we wrap up with a CIO perspective, Dan Sheehan. He provides some practical advice, building on his experience as a CIO, COO and CTO: specifically, how do you as a business technology leader deal with the rapid pace of change and still be able to drive business results? Okay, so let's now hear from the community. Please run the highlights. >> Well, I think one of the things we talked about COVID is the personal impact to me but other people as well. One of the things that people are craving right now is information, factual information, truth, texture as we call it. But this event for us, Dave, is our first inaugural editorial event. Rob Hof, Kristen Nicole, the entire Cube team, SiliconANGLE and theCUBE, we're really trying to put together more of a cadence. We're going to do more of these events where we can put out and feature the best people in our community that have great fresh voices. You know, we do interview the big names, Andy Jassy, Michael Dell, the billionaires, the people making things happen, but it's often the people under them that are the real newsmakers. >> If you look at the architecture of cloud data centers, the single most important invention was scale-out. Scale-out of identical or near identical servers, all connected to a standard IP ethernet network. That's the architecture. Now the building blocks of this architecture are ethernet switches, which make up the network, IP ethernet switches. And then the servers are all built using general purpose x86 CPUs with DRAM, with SSD, with hard drives, all connected to the CPU. Now, the fact that you scale out these server nodes, as they're called, was very, very important in addressing the problem of how do you build very large scale infrastructure using general purpose compute. But this architecture, Dave, is a compute-centric architecture. And the reason it's a compute-centric architecture is, if you open up this server node, 
what you see is a connection to the network, typically with a simple network interface card. And then you have CPUs, which are in the middle of the action. Not only are the CPUs processing the application workload, but they're processing all of the IO workload, what we call data-centric workload. And so when you connect SSDs and hard drives and GPUs and everything to the CPU, as well as to the network, you can now imagine that the CPU is doing two functions. It's running the applications, but it's also playing traffic cop for the IO. So every IO has to go to the CPU and you're executing instructions, typically in the operating system, and you're interrupting the CPU many, many millions of times a second. Now general purpose CPUs and the architecture of the CPUs were never designed to play traffic cop, because the traffic cop function is a function that requires you to be interrupted very, very frequently. So it's critical that in this new architecture, where there's a lot of data, a lot of east-west traffic, the percentage of workload which is data-centric has gone from maybe 1% to 2% to 30% to 40%. >> The path to innovation is paved by data. If you don't have data, you don't have machine learning, you don't have the next generation of analytics applications that helps you chart a path forward into a world that seems to be changing every week. And so in order to have that insight, in order to have that predictive forecasting that every company needs, regardless of what industry that you're in today, it all starts from data. And I think the key shift that I've seen is how customers are thinking about that data, about being instantly usable. Whereas in the past, it might've been a backup. Now it's part of a data lake. And if you can bring that data into a data lake, you can have not just analytics or machine learning or auditing applications, it's really, what does your application do for your business and how can it take advantage of that vast amount of shared data set in your business? >> We are actually moving towards decentralization. If we think today, like, let's move data aside, if we said the only way the web would work, the only way we get access to various applications on the web or pages, is to centralize it, we would laugh at that idea. But for some reason we don't question that when it comes to data, right? So I think it's time to embrace the complexity that comes with the growth of number of sources, the proliferation of sources and consumption models, embrace the distribution of sources of data, that they're not just within one part of the organization, they're not just within even the bounds of the organization, they are beyond the bounds of the organization. And then look back and say, okay, if that's the trend of our industry in general, given the fabric of computation and data that we put, you know, globally in place, then how do the architecture and technology and organizational structure and incentives need to move to embrace that complexity? And to me that requires a paradigm shift, a full stack, from how we organize our organizations, how we organize our teams, how we put technology in place, to look at it from a decentralized angle. >> I actually think we're in the midst of the transition to what's called a distributed cloud, where if you look at modernized cloud apps today, they're actually made up of services from different clouds and also distributed edge locations. And that's going to have a pretty profound impact on the way we go vast. 
>> We wake up every day worrying about our customer and worrying about the customer condition, and to absolutely make sure we deliver the best in the first attempt that we do. So when you take the plethora of products we deliver in Azure, be it Azure SQL, be it Azure Cosmos DB, Synapse, Azure Databricks, which we did in partnership with Databricks, Azure Machine Learning, and recently when we sort of offered the world's first comprehensive data governance solution in Azure Purview, I would, I would humbly submit to you that we are leading the way. >> How important are rankings within the Google cloud team, or are you focused mainly more on growth and just consistency? >> No, I don't think, again, I'm not worried about, we are not focused on ranking or any of that stuff. Typically I think we are worried about making sure customers are satisfied and adding more and more customers. So if you look at the volume of customers we are signing up, a lot of the large deals we're doing, if you look at the announcements we've made over the last year, there's been tremendous momentum around that. >> The thing that is really interesting about where we have been versus where we're going is we spend a lot of time talking about virtualizing hardware and moving that around. And what does that look like? And creating that as more of a software paradigm. And the thing we're talking about now is, what does cloud as an operating model look like? What is the manageability of that? What is the security of that? What, you know, we've talked a lot about containers and moving into different, DevSecOps and all those different trends that we've been talking about. Like now we're doing them. So we've only gotten to the first crank of that. And I think every technology vendor we talk to now has to address how they are going to do a highly distributed management and security landscape. Like, what are they going to layer on top of that? Because it's not just about, oh, I've taken a rack of something, server, storage, compute, and virtualized it. I now have to create a new operating model around it. In a way we're almost redoing what the OSI stack looks like and what the software and solutions are for that. >> And the whole idea is that in every recession we make things smaller. You know, in '91 we said we're going to go away from mainframes into Unix servers, and we made the unit of compute smaller. Then in the year 2000, when the next bubble burst and the recession afterwards, we moved from Unix servers to Wintel, Windows and Intel x86, and eventually Linux as well. Again, we made things smaller, going from million dollar servers to $5,000 servers, short-lived servers. And that's what we did in 2008, 2009. I said, look, we don't even need to buy servers. We can do things with virtual machines, which are servers that are an incarnation in the digital world; there's nothing in the physical world that actually even lives, but we made it even smaller. And now with cloud in the last three, four years, and what will happen in this coming decade, they're going to make it even smaller, not just in space, which is size, with functions and containers and virtual machines, but also in time. >> So I think the right way to think about edge is, where can you reasonably process the data? And it obviously makes sense to process data at the first opportunity you have, but much data is encrypted between the original device, say, and the application. 
And so edge as a place doesn't make as much sense as edge as an opportunity to decrypt and analyze it in the clear. >> When I think of shift-left, I think of that Mobius loop that we all look at all of the time, and how we deliver and, like, plan, write code, deliver software, and then manage it, monitor it, right, like that entire DevOps workflow. And today, when we think about where security lives, it either is a blocker to deploying to production, or most commonly it lives long after code has been deployed to production. And there's a security team constantly playing catch up, trying to ensure that the development team, whose job is to deliver value to their customers quickly, right? Deploy as fast as we can as many great customer-facing features. They're then looking at it months after software has been deployed and then hurrying and trying to assess where the bugs are and trying to get that information back to software developers so that they can fix those issues. Shifting left to me means software engineers are finding those bugs as they're writing code or in the CI/CD pipeline, long before code has been deployed to production. >> I've been doing this for quite a while now, and it still comes down to the people. I can get the technology to do what it needs to do as long as they have the right requirements. So that goes back to people, making sure we have the partnership, that goes back to leadership and the people, and then the change management aspects. Right out of the gate, you should be worrying about how this change is going to be, how it's going to affect people, and then the adoption and engagement, because adoption is critical, because you can go create the best thing you think from a technology perspective, but if it doesn't get used correctly, it's not worth the investment. So I agree, what is a digital transformation or innovation? It still comes down to understanding the business model and injecting and utilizing technology to grow the business or reduce costs. >> Okay, so look, there's so much other content on theCUBE on Cloud event site, we'll put the link in the description below. We have other CEOs like Kathy Southwick and Ellen Nance. We have the CEO of UiPath, Daniel Dienes, who talks about automation in the cloud, and Ana Pinczuk from Anaplan. By the way, Dave Humphrey from Bain also talks about his $750 million investment in Nutanix. Interesting. Rachel Stephens from RedMonk talks about the future of software development in the cloud, and CTO Hillery Hunter talks about the cloud going vertical into financial services. And of course, John Furrier and I, along with special guests like Sarbjeet Johal, share our take on key trends, data and perspectives. So right here, you see theCUBE on Cloud. There's a URL, check it out. Again, we'll pop this URL in the description of the video. So there's some great content there. I want to thank everybody who participated, and thank you for watching this special episode of theCUBE Insights Powered by ETR. This is Dave Vellante, and I'd appreciate any feedback you might have on how we can deliver better event content for you in the future. We'll be doing a number of these and we look forward to your participation and feedback. Thank you, all right, take care, we'll see you next time. (upbeat music)

Published Date : Jan 22 2021

JG Chirapurath, Microsoft


 

>> Okay, we're now going to explore the vision of the future of cloud computing from the perspective of one of the leaders in the field, JG Chirapurath, the Vice President of Azure Data, AI and Edge at Microsoft. JG, welcome to theCUBE on Cloud, thanks so much for participating. >> Well, thank you, Dave. And it's a real pleasure to be here with you and just want to welcome the audience as well. >> Well, JG, judging from your title, we have a lot of ground to cover and our audience is definitely interested in all the topics that are implied there. So let's get right into it. We've said many times in theCUBE that the new innovation cocktail comprises machine intelligence or AI applied to troves of data with the scale of the cloud. We're no longer driven by Moore's Law. It's really those three factors, and those ingredients are going to power the next wave of value creation in the economy. So first, do you buy into that premise? >> Yes, absolutely. We do buy into it and I think one of the reasons why we put data, analytics and AI together is because all of that really begins with the collection of data and managing it and governing it, unlocking analytics in it. And we tend to see things like AI, the value creation that comes from AI, as being on that continuum of having started off with really things like analytics and proceeding to machine learning and the use of data in interesting ways. >> Yes, I'd like to get some more thoughts around data and how you see the future of data and the role of cloud and maybe how Microsoft's strategy fits in there. I mean, your portfolio, you've got SQL Server, Azure SQL, you got Arc, which is kind of Azure everywhere for people that aren't familiar with that, you got Synapse, which of course does all the integration, the data warehouse, and it gets things ready for BI and consumption by the business and the whole data pipeline. And then all the other services, Azure Databricks, you got Cosmos in there, you got Blockchain, you've got open source services like PostgreSQL and MySQL. So lots of choices there. And I'm wondering, how do you think about the future of cloud data platforms? It looks like your strategy is right tool for the right job. Is that fair? >> It is fair, but it's also, just to step back and look at it, it's fundamentally what we see in this market today, is that customers, they seek really a comprehensive proposition. And when I say a comprehensive proposition, it is sometimes not just about saying that, "Hey, listen, "we know you're a SQL Server company, "we absolutely trust that you have the best "Azure SQL database in the cloud. "But tell us more." We've got data that is sitting in Hadoop systems. We've got data that is sitting in PostgreSQL, in things like MongoDB. So that open source proposition today in data and data management and database management has become front and center. So our real sort of push there is, when it comes to migration, management, modernization of data, to present the broadest possible choice to our customers, so we can meet them where they are. However, when it comes to analytics, one of the things they ask for is give us a lot more convergence. It really, it isn't about having 50 different services. It's really about having that one comprehensive service that is converged. That's where things like Synapse fit in, where you can just land any kind of data in the lake and then use any compute engine on top of it to drive insights from it. 
So fundamentally, it is that flexibility that we really sort of focus on, to meet our customers where they are, and really not pushing our dogma and our beliefs on it, but to meet our customers according to the way they've deployed stuff like this. >> So that's great. I want to stick on this for a minute, because when I have guests on like yourself, they never want to talk about the competition, but that's all we ever talk about, and that's all your customers ever talk about. Because the counter to that right tool for the right job, and that I would say is really kind of Amazon's approach, is that you've got the single unified data platform, the mega database, so it does it all. And that's kind of Oracle's approach. It sounds like you want to have your cake and eat it too. So you've got the right tool for the right job approach, but you've got an integration layer that allows you to have that converged database. I wonder if you could add color to that and confirm or deny what I just said. >> No, that's a very fair observation, but I'd say there's a nuance in what I sort of described. When it comes to data management, when it comes to apps, we present customers with the broadest choice. Even in that perspective, we also offer convergence. So case in point, when you think about Cosmos DB, under that one sort of service you get multiple engines, but with the same properties, right, global distribution, the five nines availability. It gives customers the ability to basically choose, when they have to build that new cloud native app, to adopt Cosmos DB and adopt it in a way where they choose an engine that is most flexible to them. However, when it comes to, say, a SQL Server, for example, in modernizing it, sometimes you just want to lift and shift it into things like IaaS. In other cases, you want to completely rewrite it. So you need to have the flexibility of choice there that is presented by a legacy of what sits on premises. When you move into things like analytics, we absolutely believe in convergence. So we don't believe that, look, you need to have a relational data warehouse that is separate from a Hadoop system, that is separate from, say, a BI system that is just a bolt-on. For us, we love the proposition of really building things that are so integrated that once you land data, once you prep it inside the lake, you can use it for analytics, you can use it for BI, you can use it for machine learning. So I think our sort of differentiated approach speaks for itself there. 
And I think what's happening today and I think the place where I see the most amount of rethink and the most amount of push from our customers to really rethink is the area of analytics and AI. It's almost as if what worked in the past will not work going forward. So when you think about analytics only in the enterprise today, you have relational systems, you have Hadoop systems, you've got data marts, you've got data warehouses you've got enterprise data warehouse. So those large honking databases that you use to close your books with. But when you start to modernize it, what people are saying is that we don't want to simply take all of that complexity that we've built over, say three, four decades and simply migrate it en masse exactly as they are into the cloud. What they really want is a completely different way of looking at things. And I think this is where services like Synapse completely provide a differentiated proposition to our customers. What we say there is land the data in any way you see, shape or form inside the lake. Once you landed inside the lake, you can essentially use a Synapse Studio to prep it in the way that you like. Use any compute engine of your choice and operate on this data in any way that you see fit. So case in point, if you want to hydrate a relational data warehouse, you can do so. If you want to do ad hoc analytics using something like Spark, you can do so. If you want to invoke Power BI on that data or BI on that data, you can do so. If you want to bring in a machine learning model on this prep data, you can do so. So inherently, so when customers buy into this proposition, what it solves for them and what it gives to them is complete simplicity. One way to land the data multiple ways to use it. And it's all integrated. >> So should we think of Synapse as an abstraction layer that abstracts away the complexity of the underlying technology? Is that a fair way to think about it? >> Yeah, you can think of it that way. It abstracts away Dave, a couple of things. It takes away that type of data. Sort of complexities related to the type of data. It takes away the complexity related to the size of data. It takes away the complexity related to creating pipelines around all these different types of data. And fundamentally puts it in a place where it can be now consumed by any sort of entity inside the Azure proposition. And by that token, even Databricks. You can in fact use Databricks in sort of an integrated way with the Azure Synapse >> Right, well, so that leads me to this notion of and I wonder if you buy into it. So my inference is that a data warehouse or a data lake could just be a node inside of a global data mesh. And then it's Synapse is sort of managing that technology on top. Do you buy into that? That global data mesh concept? >> We do and we actually do see our customers using Synapse and the value proposition that it brings together in that way. Now it's not where they start, oftentimes when a customer comes and says, "Look, I've got an enterprise data warehouse, "I want to migrate it." Or "I have a Hadoop system, I want to migrate it." But from there, the evolution is absolutely interesting to see. I'll give you an example. One of the customers that we're very proud of is FedEx. And what FedEx is doing is it's completely re-imagining its logistics system. That basically the system that delivers, what is it? The 3 million packages a day. And in doing so, in this COVID times, with the view of basically delivering on COVID vaccines. 
One of the ways they're doing it, is basically using Synapse. Synapse is essentially that analytic hub where they can get complete view into the logistic processes, way things are moving, understand things like delays and really put all of that together in a way that they can essentially get our packages and these vaccines delivered as quickly as possible. Another example, it's one of my favorite. We see once customers buy into it, they essentially can do other things with it. So an example of this is really my favorite story is Peace Parks initiative. It is the premier of white rhino conservancy in the world. They essentially are using data that has landed in Azure, images in particular to basically use drones over the vast area that they patrol and use machine learning on this data to really figure out where is an issue and where there isn't an issue. So that this part with about 200 radios can scramble surgically versus having to range across the vast area that they cover. So, what you see here is, the importance is really getting your data in order, landing consistently whatever the kind of data it is, build the right pipelines, and then the possibilities of transformation are just endless. >> Yeah, that's very nice how you worked in some of the customer examples and I appreciate that. I want to ask you though that some people might say that putting in that layer while you clearly add simplification and is I think a great thing that there begins over time to be a gap, if you will, between the ability of that layer to integrate all the primitives and all the piece parts, and that you lose some of that fine grain control and it slows you down. What would you say to that? >> Look, I think that's what we excel at and that's what we completely sort of buy into. And it's our job to basically provide that level of integration and that granularity in the way that it's an art. I absolutely admit it's an art. There are areas where people crave simplicity and not a lot of sort of knobs and dials and things like that. But there are areas where customers want flexibility. And so I think just to give you an example of both of them, in landing the data, in consistency in building pipelines, they want simplicity. They don't want complexity. They don't want 50 different places to do this. There's one way to do it. When it comes to computing and reducing this data, analyzing this data, they want flexibility. This is one of the reasons why we say, "Hey, listen you want to use Databricks. "If you're buying into that proposition. "And you're absolutely happy with them, "you can plug it into it." You want to use BI and essentially do a small data model, you can use BI. If you say that, "Look, I've landed into the lake, "I really only want to use ML." Bring in your ML models and party on. So that's where the flexibility comes in. So that's sort of that we sort of think about it. >> Well, I like the strategy because one of our guests, Jumark Dehghani is I think one of the foremost thinkers on this notion of of the data mesh And her premise is that the data builders, data product and service builders are frustrated because the big data system is generic to context. There's no context in there. But by having context in the big data architecture and system you can get products to market much, much, much faster. So, and that seems to be your philosophy but I'm going to jump ahead to my ecosystem question. You've mentioned Databricks a couple of times. There's another partner that you have, which is Snowflake. 
They're kind of trying to build out their own DataCloud, if you will and GlobalMesh, and the one hand they're a partner on the other hand they're a competitor. How do you sort of balance and square that circle? >> Look, when I see Snowflake, I actually see a partner. When we see essentially we are when you think about Azure now this is where I sort of step back and look at Azure as a whole. And in Azure as a whole, companies like Snowflake are vital in our ecosystem. I mean, there are places we compete, but effectively by helping them build the best Snowflake service on Azure, we essentially are able to differentiate and offer a differentiated value proposition compared to say a Google or an AWS. In fact, that's been our approach with Databricks as well. Where they are effectively on multiple clouds and our opportunity with Databricks is to essentially integrate them in a way where we offer the best experience the best integrations on Azure Berna. That's always been our focus. >> Yeah, it's hard to argue with the strategy or data with our data partner and ETR shows Microsoft is both pervasive and impressively having a lot of momentum spending velocity within the budget cycles. I want to come back to AI a little bit. It's obviously one of the fastest growing areas in our survey data. As I said, clearly Microsoft is a leader in this space. What's your vision of the future of machine intelligence and how Microsoft will participate in that opportunity? >> Yeah, so fundamentally, we've built on decades of research around essentially vision, speech and language. That's been the three core building blocks and for a really focused period of time, we focused on essentially ensuring human parity. So if you ever wonder what the keys to the kingdom are, it's the boat we built in ensuring that the research or posture that we've taken there. What we've then done is essentially a couple of things. We've focused on essentially looking at the spectrum that is AI. Both from saying that, "Hey, listen, "it's got to work for data analysts." We're looking to basically use machine learning techniques to developers who are essentially, coding and building machine learning models from scratch. So for that select proposition manifest to us as really AI focused on all skill levels. The other core thing we've done is that we've also said, "Look, it'll only work as long "as people trust their data "and they can trust their AI models." So there's a tremendous body of work and research we do and things like responsible AI. So if you asked me where we sort of push on is fundamentally to make sure that we never lose sight of the fact that the spectrum of AI can sort of come together for any skill level. And we keep that responsible AI proposition absolutely strong. Now against that canvas Dave, I'll also tell you that as Edge devices get way more capable, where they can input on the Edge, say a camera or a mic or something like that. You will see us pushing a lot more of that capability onto the edge as well. But to me, that's sort of a modality but the core really is all skill levels and that responsibility in AI. >> Yeah, so that brings me to this notion of, I want to bring an Edge and hybrid cloud, understand how you're thinking about hybrid cloud, multicloud obviously one of your competitors Amazon won't even say the word multicloud. You guys have a different approach there but what's the strategy with regard to hybrid? 
Do you see the cloud, you're bringing Azure to the edge, maybe you could talk about that and talk about how you're different from the competition. >> Yeah, I think on the edge, and I'll even be the first one to say that the word edge itself is conflated a little bit. But I will tell you, just focusing on hybrid, this is one of the places where, I would say, 2020, if I were to look back, from a COVID perspective in particular, it has been the most informative. Because we absolutely saw customers digitizing, moving to the cloud, and we really saw hybrid in action. 2020 was the year that hybrid sort of really became real from a cloud computing perspective. And an example of this is we understood that it's not all or nothing. So sometimes customers want Azure consistency in their data centers. This is where things like Azure Stack come in. Sometimes they basically come to us and say, "We want the flexibility of adopting "flexible, open platforms, let's say containers, "orchestrating with Kubernetes, "so that we can essentially deploy it wherever we want." And so when we designed things like Arc, it was built with that flexibility in mind. So, here's the beauty of what something like Arc can do for you. If you have a Kubernetes endpoint anywhere, we can deploy an Azure service onto it. That is the promise. Which means, if for some reason the customer says that, "Hey, I've got "this Kubernetes endpoint in AWS, and I love Azure SQL," you will be able to run Azure SQL inside AWS. There's nothing that stops you from doing it. So inherently, remember, our first principle is always to meet our customers where they are. So from that perspective, multicloud is here to stay. We are never going to be the people that say, "I'm sorry." We will never say (speaks indistinctly) multicloud, but it is a reality for our customers. >> So I wonder if we could close, thank you for that, by looking back and then ahead. And I want to put forth, maybe it's a criticism, but maybe not. Maybe it's an art of Microsoft. But first, Microsoft did an incredible job at transitioning its business. Azure is omnipresent, as we said, our data shows that. So two-part question. First, Microsoft got there by investing in the cloud, really changing its mindset, I think, and leveraging its huge software estate and customer base to put Azure at the center of its strategy. And many have said, me included, that you got there by creating products that are good enough. You do a 1.0, it's still not that great, then a 2.0, and maybe not the best, but acceptable for your customers. And that's allowed you to grow very rapidly and expand your market. How do you respond to that? Is that a fair comment? Are you more than good enough? I wonder if you could share your thoughts. 
And recently when we premiered, we sort of offered the world's first comprehensive data governance solution in Azure Purview. I would humbly submit it to you that we are leading the way and we're essentially showing how the future of data, AI and the Edge should work in the cloud. >> Yeah, I'd be disappointed if you capitulated in any way, JG. So, thank you for that. And that's kind of last question is looking forward and how you're thinking about the future of cloud. Last decade, a lot about cloud migration, simplifying infrastructure to management and deployment. SaaSifying My Enterprise, a lot of simplification and cost savings and of course redeployment of resources toward digital transformation, other valuable activities. How do you think this coming decade will be defined? Will it be sort of more of the same or is there something else out there? >> I think that the coming decade will be one where customers start to unlock outsize value out of this. What happened to the last decade where people laid the foundation? And people essentially looked at the world and said, "Look, we've got to make a move. "They're largely hybrid, but you're going to start making "steps to basically digitize and modernize our platforms. I will tell you that with the amount of data that people are moving to the cloud, just as an example, you're going to see use of analytics, AI or business outcomes explode. You're also going to see a huge sort of focus on things like governance. People need to know where the data is, what the data catalog continues, how to govern it, how to trust this data and given all of the privacy and compliance regulations out there essentially their compliance posture. So I think the unlocking of outcomes versus simply, Hey, I've saved money. Second, really putting this comprehensive sort of governance regime in place and then finally security and trust. It's going to be more paramount than ever before. >> Yeah, nobody's going to use the data if they don't trust it, I'm glad you brought up security. It's a topic that is at number one on the CIO list. JG, great conversation. Obviously the strategy is working and thanks so much for participating in Cube on Cloud. >> Thank you, thank you, Dave and I appreciate it and thank you to everybody who's tuning into today. >> All right then keep it right there, I'll be back with our next guest right after this short break.

Published Date : Jan 5 2021


Sagar Kadakia | CUBE Conversation, December 2020


 

>> From The Cube Studios in Palo Alto and Boston connecting with thought-leaders all around the world, this is a Cube Conversation. >> Hello, everyone, and welcome to this Cube Conversation, I'm Dave Vellante. Now, you know I love data, and today we're going to introduce you to a new data and analytical platform, and we're going to take it to the world of cloud database and data warehouses. And with me is Sagar Kadakia who's the head of Enterprise IT (indistinct) 7Park Data. Sagar, welcome back to the Cube. Good to see you. >> Thank you so much, David. I appreciate you having me back on. >> Hey, so new gig for you, how's it going? Tell us about 7Park Data. >> Yeah. Look, things are going well. It started at about two months ago, just a, you know, busy. I had a chance last, you know a few months to kind of really dig into the dataset. We have a tremendous amount of research coming out in Q4 Q1 around kind of the public cloud database market public cloud analytics market. So, you know, really looking forward to that. >> Okay, good. Well, let's bring up the first slide. Let's talk about where this data comes from. Tell us a little bit more about the platform. Where's the insight. >> Yeah, absolutely. So I'll talk a little about 7Park and then we'd kind of jump into the data a little bit. So 7Park was founded in 2012 in terms of differentiator, you know with other alternative data firms, you know we use NLP machine learning, you know AI to really kind of, you know, structure like noisy and unstructured data sets really kind of generate insight from that. And so, because a lot of that know how we ended up being acquired by Vista back in 2018. And really like for us, you know the mandate there is to really, you know look across all their different portfolio companies and try to generate insight from all the data assets you know, that these portfolio companies have. So, you know, today we're going to be talking about you know, one of the data sets from those companies it's that cloud infrastructure data set. We get it from one of the portfolio companies that you know, helps organizations kind of manage and optimize their cloud spend. It's real time data. We essentially get this aggregated daily. So this certainly different than, you know your traditional providers maybe giving you quarterly or kind of by annual data. This is incredibly granular, real time all the way down to the invoice level. So within this cloud infrastructure dataset we're tracking several billion dollars worth of spend across AWS, Azure and GCP. Something like 350 services across like 20 plus markets. So, you know, security machine learning analytics database which we're going to talk about today. And again like the granularity of the KPIs I think is kind of really what kind of you know, differentiates this dataset you know, with just within database itself, you know we're tracking over 20 services. So, you know, lots to kind of look forward to kind of into Q4 and Q1. >> So, okay. So the main spring of your data is if I'm a customer and I there's a service out there there are many services like this that can help me optimize my spend and the way they do that is I basically connect their APIs. So they have visibility on what the transactions that I'm making my usage statistics et cetera. And then you take that and then extrapolate that and report on that. Is that right? >> Exactly. Yeah. We're seeing just on this one data set that we're going to talk about today, it's something like six 700 million rows worth of data. 
And so kind of what we do is, you know we kind of have the insight layer on top of that or the analytics layer on top of all that unstructured data, so that we can get a feel for, you know a whole host of different kind of KPIs spend, adoption rates, market share, you know product size, retention rates, spend, you know, net price all that type of stuff. So, yeah, that's exactly what we're doing. >> Love it, there's more transparency the better. Okay. So, so right, because this whole world of market sizing has been very opaque you know, over the years, and it's like you know, backroom conversations, whether it's IDC, Gartner who's got what don't take, you know and the estimations and it's very, very, you know it's not very transparent so I'm excited to see what you guys have. Okay. So, so you have some data on the public cloud and specifically the database market that you want to share with our audience. Let's bring up the next graphic here. What are we looking at here Sagar? What are these blue lines and red lines what's this all about? >> Yeah. So and look, we can kind of start at the kind of the 10,000 foot view kind of level here. And so what we're looking at here is our estimates for the entire kind of cloud database market, including data warehousing. If you look all the way over to the right I'll kind of explain some of these bars in a minute but just high level, you know we're forecasting for this year, $11.8 billion. Now something to kind of remember about that is that's just AWS, Azure and GCP, right? So that's not the entire cloud database market. It's just specific to those three providers. What you're looking at here is the breakout and blue and purple is SQL databases and then no SQL databases. And so, you know, to no one's surprise here and you can see, you know SQL database is obviously much larger from a revenue standpoint. And so you can see just from this time last year, you know the database market has grown 40% among these three cloud providers. And, you know, though, we're not showing it here, you know from like a PI perspective, you know database is playing a larger and larger role for all three of these providers. And so obviously this is a really hot market, which is why, you know we're kind of discussing a lot of the dynamics. You don't need to Q and Q Q4 and Q1 >> So, okay. Let's get into some of the specific firm-level data. You have numbers that you want to share on Amazon Redshift and Google BigQuery, and some comments on Snowflake let's bring up the next graphic. So tell us, it says public cloud data, warehousing growth tempered by Snowflake, what's the data showing. And let's talk about some of the implications there. >> Yeah, no problem. So yeah, this is kind of one of the markets, you know that we kind of did a deep dive in tomorrow and we'll kind of get this, you know, get to this in a few minutes, we're kind of doing a big CIO panel kind of covering data, warehousing, RDBMS documents store key value, graph all these different database markets but I thought it'd be great, you know just cause obviously what's occurring here and with snowflake to kind of talk about, you know the data warehousing market, you know, look if you look here, these are some of the KPIs that we have you know, and I'll kind of start from the left. Here are some of the orange bars, the darker orange bars. Those are our estimates for AWS Redshift. And so you can see here, you know we're projecting about 667 million in revenue for Redshift. 
But if you look at the lighter arm bars, you can see that the service went from representing about 2% of you know, AWS revenue to about 1.5%. And we think some of that is because of Snowflake. And if we kind of, take a look at some of these KPIs you know, below those bar charts here, you know one of the things that we've been looking at is, you know how are longer-term customer spending and how are let's just say like newer customers spending, so to speak. So kind of just like organic growth or kind of net expansion analysis. And if you look at on the bottom there, you'll see, you know customers in our dataset that we looked at, you know that were there 3Q20 as well as 3Q19 their spend on AWS Redshift is 23%. Right? And then look at the bifurcation, right? When we include essentially all the new customers that onboard it, right after 3Q19, look at how much they're bringing down the spend increase. And it's because, you know a lot of spend that was perhaps meant for Redshift is now going to Snowflake. And look, you would expect longer-term customers to spend more than newer customers. But really what we're doing is here is really highlighting the stark contrast because you have kind of back to back KPIs here, you know between organic spend versus total spend and obviously the deceleration in market share kind of coming down. So, you know, something that's interesting here and we'll kind of continue tracking that. >> Okay. So let's maybe come back to this mass Colombo questions here. So the start with the orange side. So we're talking about Snowflake being 667 million. These are your estimates extrapolated based on what we talked about earlier, 1.5% of the AWS portfolio of course you see things like, they continue to grow. Amazon made a bunch of storage announcements last week at the first week of re-invent (indistinct) I mean just name all kinds of databases. And so it's competing with a lot of other services in the portfolio and then, but it's interesting to see Google BigQuery a much larger percentage of the portfolio, which again to me, makes sense people like BigQuery. They like the data science components that are built in the machine learning components that are built in. But then if you look at Snowflake's last quarter and just on a run rate basis, it's over there over $600 million. Now, if you just multiply their last quarter by four from a revenue standpoint. So they got Redshift in their sites, you know if this is, you know to the extent this is the correct number and I know it's an estimate but I haven't seen any better numbers out there. Interesting Sagar, I mean Snowflake surpassed the value of snowflakes or past service now last Friday, it's probably just in trading today you know, on Monday it's maybe Snowflake is about a billion dollars less than the in value than IBM. So you're saying snowflake in a lot of attention, post IPO the thing is even exploded more. I mean, it's crazy. And I presume that's rippled into the customer interest areas. Now the ironic thing here of course, is that that snowflake most of its revenue comes from AWS running on AWS at the same time, AWS and or Redshift and snowflake compete. So you have this interesting dynamic going on. >> Yeah. You know, we've spoken to so many CIOs about kind of the dynamics here with Redshift and BigQuery and Snowflake, you know as it kind of pertains to, you know, Redshift and Snowflake. I think, you know, what I've heard the most is, look if you're using Redshift, you're going to keep using it. 
But if you're new to data warehousing, so to speak, you're going to move to Snowflake, or you're going to start with Snowflake. And I think when it comes to data warehousing, you're seeing a lot of decisions coming from the bottom up now, so a lot of developers, and obviously their preference is going to be Snowflake. And then when you look at BigQuery here over to the right, again, you're seeing revenue growth, but as a percentage of total GCP revenue you're seeing it come down. And look, we don't show it here, but another dynamic that we're seeing with BigQuery is that adoption rates are falling versus this time last year. So we think, again, that could be because of Snowflake. Now, one thing to highlight here with BigQuery: it's kind of the low-cost alternative, so to speak. Once Redshift gets too expensive, you kind of move over to BigQuery, and we put some price KPIs down here all the way at the bottom of the chart for both of them. When you think about the net price per TB scanned, Redshift does it prorated, right? It's five bucks or whatever for whatever you scan in, whereas with GCP you get the first terabyte for free, and then everything is prorated after that. And so you can see the net price, right? That's the price people actually pay, and you can see it's significantly lower than Redshift. And again, it's a lower-cost alternative. So when you think about organizations or CIOs that want to save some money, certainly BigQuery is an option. But certainly, I think just overall, Snowflake is having an impact here, and you can see it from the percentage of total revenue for both of these coming down. If we look at other AWS database services, or the other services you mentioned, we're not seeing that trend; we're seeing percentage of total revenue hang in or accelerate. And so that's why we want to point this out, as this is something unique for AWS and GCP, where even though you're seeing growth, it's decelerating, and then of course you can see the percentage of revenue it represents coming down.

>> I think it's interesting to look at these two companies and then of course Snowflake. So if you think about Snowflake and BigQuery, both of those started in the cloud; they were true born-in-the-cloud databases. Whereas Redshift was a deal that Amazon did, you know, with ParAccel back in the day, a one-time license fee, and then they re-engineered it to be kind of cloud based. And so there is some of that historical on-prem baggage in there. I know that AWS did a tremendous job in rearchitecting that, but nonetheless. So I'll give you a couple of examples. If you go back to last year's re:Invent, 2019, of course Snowflake was really the first to popularize this idea of separating compute from storage, and even compute from compute, which is kind of a nuance, so I won't go into that. But the idea being you can dial compute up or down as you need it; you can even turn off compute in the world of Snowflake and you're just paying S3 storage charges. What Amazon did last re:Invent was they announced the separation of compute and storage, but the way they did it was with a tiering architecture.
So you can't ever actually fully turn off the compute, but it's great. I mean, it's customers I've talked to say, yes I'm saving a lot of money, you know, with this approach. But again, there's these little nuances. So what Snowflake announced this year was their data cloud and what the data cloud is as a whole new architecture. It's based on this global mesh. It lives across both AWS and Azure and GCP. And what Snowflake has done is they've taken they've abstracted the complexity of the clouds. So you don't even necessarily have to know what you're running on. You have to worry about it any Snowflake user inside of that data cloud if given access can share data with any other user. So it's a very powerful concept that they're doing. AWS at reinvent this year announced something called AWS glue elastic views which basically allows you to take data across their entire database portfolio. And I'm going to put, share in quotes. And I put it in quotes because it's essentially doing copying from a source pushing to a target AWS database and then doing a change data management capture and pushes that over time. So it, it feels like kind of an attempt to do their own data cloud. The advantages of AWS is that they've got way more data stores than just Snowflake cause it's one data store. So was AWS says Aurora dynamo DB Redshift on and on and on streaming databases, et cetera where Snowflake is just Snowflake. And so it's going to be interesting to see, you know these two juxtaposing philosophies but I want it to sort of lay that out because this is just it's setting up as a really interesting dynamic. Then you can bring in Azure as well with Microsoft and what they're doing. And I think this is going to be really fascinating to see how this plays out over the next decade. >> Yeah. I think some of the points you brought up maybe a little bit earlier were just around like the functional limits of a Redshift. Right. And I think that's where, you know Snowflake obviously does it does very, very well you know, you kind of have these, you know kind of to come, you know, you kind of have these, you know if you kind of think about like the market drivers right? Like, let's think about even like the prior slide that we showed, where we saw overall you know, database growth, like what's driving all of that what's driving Redshift, right. Obviously proximity application, interdependencies, right. Costs. You get all the credits or people are already working with the big three providers. And so there's so many reasons to continue spending with them, obviously, you know, COVID-19 right. Obviously all these apps being developed right in the cloud versus data centers and things of that nature. So you have all of these market drivers, you know for the cloud database services for Redshift. And so from that perspective, you know you kind of think, well why are people even to go to a third party vendor? And I think, you know, at that point it has to be the functional superiority. And so again, like a lot of times it depends on, you know, where decisions are coming from you know, top down or bottom up obviously at the engineering at the developer level they're going to want better functionality. Maybe, you know, top-down sometimes, you know it's like, look, we have a lot of credits, you know we're trying to save money, you know from a security perspective it could just be easier to spin something up you know, in AWS, so to speak. 
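To make the "net price per TB scanned" KPI from the pricing discussion a couple of turns back a bit more concrete, here is a minimal, illustrative sketch. The $5/TB figure and the first-free-terabyte tier are taken from the conversation and used as placeholders; real Redshift and BigQuery pricing has more dimensions (clusters, reserved capacity, flat-rate slots, storage), so this is only meant to show why the price people actually pay per TB can differ between the two models.

```python
# Rough, illustrative sketch of the "net price per TB scanned" idea discussed above.
# The $5/TB list price and the "first terabyte free" tier mirror what is described in
# the conversation; treat both numbers as placeholders, not published price sheets.

LIST_PRICE_PER_TB = 5.00  # illustrative on-demand list price, $ per TB scanned
FREE_TB_PER_MONTH = 1.0   # illustrative free tier ("first terabyte for free")


def flat_prorated_cost(tb_scanned: float) -> float:
    """Every TB scanned is billed at the list price, prorated by volume."""
    return tb_scanned * LIST_PRICE_PER_TB


def free_tier_cost(tb_scanned: float) -> float:
    """First N TB each month are free; the remainder is prorated at list price."""
    billable = max(0.0, tb_scanned - FREE_TB_PER_MONTH)
    return billable * LIST_PRICE_PER_TB


def net_price_per_tb(total_cost: float, tb_scanned: float) -> float:
    """'Net price' in the sense used above: what people actually pay per TB scanned."""
    return total_cost / tb_scanned if tb_scanned else 0.0


if __name__ == "__main__":
    for tb in (0.5, 2, 10, 100):
        a = flat_prorated_cost(tb)
        b = free_tier_cost(tb)
        print(f"{tb:>6} TB scanned: flat model ${a:8.2f} "
              f"(net ${net_price_per_tb(a, tb):.2f}/TB) | "
              f"free-tier model ${b:8.2f} "
              f"(net ${net_price_per_tb(b, tb):.2f}/TB)")
```

At low monthly scan volumes the free tier dominates the effective rate, which is the gap the "net price" KPI above is trying to capture; at high volumes the two models converge.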
So, yeah, I think these are all the dynamics that, you know organizations have to figure out every day, but at least within the data warehousing space, you are seeing spend go towards Snowflake and it's going away to an extent as we kind of see, you know growth decelerate for both of these vendors, right. It's not that revenue's not going out there is growth which is that growth is, it's just not the same as it used to be, you know, so to speak. So yeah, this is a interesting area to kind of watch and I think across all the other markets as well, you know when you think about document store, right you have AWS document DB, right. What are the impacts there with with Mongo and some of these other kind of third party data warehousing vendors, right. Having to compete with all the, you know all the different services offered by AWS Azure like the cosmos and all that stuff. So, yeah, it's definitely kind of turning into a battle Royal, you know as we kind of head into, into 2021. And so I think having all these KPIs is really helping us kind of break down and figure out, you know which areas like data warehousing are slowing down. But then what other areas in database where they're seeing a tremendous amount of acceleration, like as we said, database revenue is driving. Like it's becoming a bigger part of their overall revenue. And so they are doing well. It just, you know, there's obviously snowflake they have to compete with here. >> Well, and I think maybe to your point I infer from your point, it's not necessarily a zero sum game. And as I was discussing before, I think Snowflake's really trying to create a new market. It's not just trying to steal share from the Terra datas and the Redshifts and the PCPs of the world, big queries and and Azure SQL server and Oracle and so forth. They're trying to create a whole new concept called the data cloud, which to me is really important because my prediction is what Snowflake is doing. And they don't even really talk a ton about this but they sort of do, if you squint through the lines I think what they're doing is first of all, simplicity is there, what they're doing. And then they're putting data in the hands of business people, business line people who have domain context, that's a whole new way of thinking about a data architecture versus the prevalent way to do a data pipeline is you got data engineers and data scientists, and you ingest data. It's goes to the beginning of the pipeline and that's kind of a traditional way to do it. And kind of how I think most of the AWS customers do it. I think over time, because of the simplicity of Snowflake you're going to see people begin to look at new ways to architect data. Anyway, we're almost out of time here but I want to bring up the next slide which is a graphic, which talks about a database discussion that you guys are having on 12/8 at 2:00 PM Eastern time with Bain and Verizon who what's this all about. >> Yeah. So, you know, one of the things we wanted to do is we kind of kick off a lot of the, you know Q4 Q1 research or putting on the database spark. It is just like kind of, we did, you know we did today, which obviously, you know we're really going to expand on tomorrow at a at 2:00 PM is discuss all the different KPIs. You know, we track something like 20 plus database services. So we're going to be going through a lot more than just kind of Redshift and BigQuery. 
Look at all the dynamics there, look at how they fare against some of the third-party vendors like Snowflake, like a MongoDB, as an example. We've got some really great thought leaders, Michael Delzer and Praveen from Verizon; they're going to help, they're going to opine on all the dynamics that we're seeing. And so structure-wise it's going to be very quantitative, but then you're going to have this beautiful qualitative discussion to help support a lot of the data points that we're capturing. So yeah, we're really excited about the panel. From a why-you-should-join standpoint, look, it's just great competitive intel. If you're a third-party database or data warehousing vendor, this is the type of information that you're going to want to know: adoption rates, market sizing, retention rates, net price, reserved versus on-demand dynamics. We're going through a lot of that tomorrow. So I'm really excited about that, and just in general really excited about a lot of the research that we're putting out.

>> That's interesting. I mean, we were talking earlier about AWS Glue Elastic Views. I'd love to see your view of all the database services from Amazon, because that's what it's really designed to do, leverage data across those services. And you know, you listen to Andy Jassy talk, they've got a completely different philosophy than say Oracle, which says, hey, we've got one database to do all things; Amazon is saying we need that fine granularity. So it's going to be interesting again. And to the extent that you're providing market context, we're very excited to see that data, Sagar, and see how that evolves over time. Really appreciate you coming back in the cube and look forward to working with you.

>> Appreciate it, Dave. Thank you so much.

>> All right. Welcome. Thank you everybody for watching. This is Dave Vellante for the cube. We'll see you next time. (upbeat music)

Published Date : Dec 21 2020

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
Andrew | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
2012 | DATE | 0.99+
Dave Vellante | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
7Park | ORGANIZATION | 0.99+
Monday | DATE | 0.99+
IBM | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
40% | QUANTITY | 0.99+
Dave | PERSON | 0.99+
$11.8 billion | QUANTITY | 0.99+
2018 | DATE | 0.99+
Jesse | PERSON | 0.99+
Verizon | ORGANIZATION | 0.99+
December 2020 | DATE | 0.99+
23% | QUANTITY | 0.99+
five bucks | QUANTITY | 0.99+
Sagar Kadakia | PERSON | 0.99+
Sagar | PERSON | 0.99+
1.5% | QUANTITY | 0.99+
10,000 foot | QUANTITY | 0.99+
Palo Alto | LOCATION | 0.99+
Boston | LOCATION | 0.99+
7Park Data | ORGANIZATION | 0.99+
verizon | ORGANIZATION | 0.99+
last year | DATE | 0.99+
last week | DATE | 0.99+
last quarter | DATE | 0.99+
today | DATE | 0.99+
two companies | QUANTITY | 0.99+
350 services | QUANTITY | 0.99+
both | QUANTITY | 0.99+
2021 | DATE | 0.99+
Gartner | ORGANIZATION | 0.99+
over $600 million | QUANTITY | 0.99+
last Friday | DATE | 0.99+
Bain | ORGANIZATION | 0.99+
Oracle | ORGANIZATION | 0.99+
first slide | QUANTITY | 0.99+
667 million | QUANTITY | 0.99+
Praveen | PERSON | 0.99+
Cube | ORGANIZATION | 0.99+
Snowflake | ORGANIZATION | 0.99+
three providers | QUANTITY | 0.98+
tomorrow | DATE | 0.98+
about 2% | QUANTITY | 0.98+
Redshift | TITLE | 0.98+
about 1.5% | QUANTITY | 0.98+
20 plus markets | QUANTITY | 0.98+
six 700 million rows | QUANTITY | 0.98+
first terabyte | QUANTITY | 0.98+
Michael Delzer | PERSON | 0.98+
2:00 PM | DATE | 0.98+
Snowflake | TITLE | 0.98+
two | QUANTITY | 0.98+
first | QUANTITY | 0.98+
three | QUANTITY | 0.98+
last year | DATE | 0.98+

Greg Altman, Swiff-Train Company & Puneet Dhawan, Dell EMC | Dell Technologies World 2020


 

>> Narrator: From around the globe, it's theCUBE, with digital coverage of Dell Technologies World. Digital Experience brought to you by Dell Technologies. >> Welcome to theCUBE's coverage of Dell Technologies World 2020, the Digital Experience. I am Lisa Martin and I've got a couple of guests joining me. Please welcome Puneet Dhawan, the Director of Product Management, Hyper-converged infrastructure for Dell Technologies. Puneet great to see you today. >> Thank you, for having me over. >> And we've got a customer that's going to be articulating all the value that Puneet's going to talk about. Please welcome Greg Altman, the IT infrastructure manager from Swiff-Train. Hey, Greg, how are you today? >> I'm doing well. Thank you. >> Excellent. All right guys. So Puneet, let's start with you, give us a little bit of an overview of your role. You lead product management, for Dell Technologies partner aligned HCI systems. Talk to us about that? >> Sure, absolutely. Um so, you know, it's largely about providing customers the choice. My team specifically focuses on developing Hyper-converged infrastructure products for our customers that are aligned to key technologies from our partners, such as Microsoft, Nutanix, et cetera. And that, you know, falls very nicely with meeting our customers on what technology they want to pick on, what technology they want to go with, whether it's VMware, Microsoft, Nutanix, we have to source from the customers. >> Let's dig into Microsoft. Talk to us about Azure Stack HCI. How is Dell Tech working with them to position this in the market? >> Sure, um, this is largely about following the customer journey towards digital transformation. So both in terms of where they are in digital transformation and how they want to approach it. So for example, we have a large customer base who's looking to modernize their legacy Hyper-V architectures, and that's where Azure Stack HCI fits in very nicely, and not only our customers are able to modernize the legacy architectures using the architectural benefits of simplicity, high performance, simple management, scalability. (Greg breathes heavily) For HCI for Hyper-V, at the same time, they can connect to Azure to get the benefits of the bullet's force. Now on the other end, we have a large customer base who started off in Azure, you know, they have cloud native applications, you know, kind of born in the cloud. But they're also looking to bring some of the applications down to on-prem, or things like disconnected scenarios, regulatory concerns, data locality reasons. And for those customers, Microsoft and Dell have a department around Dell EMC Integrated solutions for Azure Stack Hub. And that's what essentially brings Azure ecosystem, on-prem so it's like running cloud in your own premises. >> So you mentioned a second ago giving customers choice, and we always talk about that at pretty much every event that we do. So tell me a little bit about how the long standing partnership that Dell Technologies has with Microsoft decades. How is that helping you to really differentiate the technology and then show the customers the different options, together these two companies can deliver? >> Sure, so we've had a very long standing partnerships, actually over three decades now. 
Across the spectrum whether we talk about our partnership more on the Windows 10 side, and the modernization of the workforce, to the level of hybrid cloud and cloud solutions, and helping even customers, you know, run their applications on Azure to our large services offerings. Over the past several years, we have realized how important is hybrid cloud and multicloud for customers. And that's where we have taken our partnership to the next level, to co-develop, co-engineer and bring to the market together our full portfolio of Azure Stack Hybrid Solutions. And that's where I've said, meeting customers on where they are either bringing Azure on-prem, or helping customers on-prem, modernize on-prem architectures using Azure Stack HCI. So, you know, there's a whole lot of core development we have done together to simplify how customers manage on-prem infrastructures on a day-to-day basis, how do they install it, even how they support it, you know, we have joined support agreements with Microsoft that encompassed and bearing the entirety of the portfolio so that customers have one place to go, which is Dell Technologies to get not only the product, either in US or worldwide, to a very secure supply chain to Dell EMC, at the same time for all their support consulting services, whether they're on-prem or in the cloud. We offer all those services in very close partnership with Microsoft. >> Terrific. Great. Let's switch over to you now, probably we talk about what Swiff-Train is doing with its Azure Stack HCI, tell our audience a little bit about Swiff-Train what you guys are what you do. >> Well, Swiff-Train is a full covering flooring wholesaler, we sell flooring across Texas, Oklahoma, Louisiana, Arkansas, even into Florida. And we're an 80 year old company, 80 plus. And we've been moving forward with kind of hybridizing our infrastructure, making use of cloud where it makes sense. And when it came to our on-prem infrastructure, it was old, well five, six years old, running Windows 2012 2016, it was time to upgrade. And when we look at doing a large scale upgrade, like that, we called Dell and say, you know, this is what we're trying to do, and what's the new technologies that we can do that makes the migration work easier. And that's where we wound up with Azure Stack. >> So from a modernization perspective, you mentioned 80 plus year old company, I was looking on the website 1937. I always like to talk to companies like that, because modernizing when you've been around for that long it's challenging, it's challenging culturally , it's challenging historically, But talk to us a little bit about some of the specifics, that you guys were looking to Dell and Microsoft to help modernize. And was this really to drive things like, you know, operational simplicity, allow the business to have more agility so that it can expand in some of those other cities, like we talked about? >> Absolutely. We were dealing with a long maintenance window five or six hours every week patching, updating. Since we moved to Azure Stack HCI, we have virtually zero downtime. That allows our night shifts or weekend crews to be able to keep working. And the system is just bulletproof. It just does not go down. And with the lifecycle management tools that we get with Windows Admin Center, and Dell's OpenManage Plug-in, I log into one pane of glass in the morning, and I look and I say, "Hey, all my servers are going great. Everything's in the green." 
I know that that day I'm not going to have any infrastructure issues, and I can deal with other issues that make the business money.

>> And I'm sure they appreciate that. Tell us a little bit about the actual implementation and the support, as Puneet talked about all of the co-development and the joint support that these two powerhouses deliver. Tell us about that implementation. And then for your day to day, what's your interaction with Dell and/or Microsoft like?

>> Well, for the implementation, we worked with our Dell representative and we came up with a sizing plan. This is what we needed to do: we had eight or nine physical servers that we wanted to get rid of, and we wanted to compress down. We went from eight or nine servers down to about three rack units of space, including the extra switches and stuff that we had to add. So we were able to get rid of a lot of storage space, or rack space. And as far as the implementation, it was really easy. Dell literally has a book; you follow the book and it's that simple. (Puneet chuckles)

>> I like that. I think for more of us these days, if you can somehow write a book that we can just follow, that would be fantastic. One more question, Greg, for you, before we go back to Puneet. As Puneet talked about in the beginning when describing his role, Dell Technologies works with a lot of other vendors. Why Azure Stack HCI for Swiff-Train?

>> Well, it made sense for us. We were already moving, several of our websites were already moved to Azure, and we've been a Hyper-V user for many years. So it was just kind of a natural evolution to migrate in that direction, because it pulls all of our management tools into one, well, you know, a one-pane-of-glass type of scenario.

>> Excellent. All right, Puneet, back to you, with some of the things that you talked about before and that Greg sort of articulated about simplifying day-to-day. Greg, I saw in my notes that you had this old aging infrastructure, you were spending five hours a week patching and maintaining, and you say that is now virtually eliminated. Puneet, Dell Technologies and Microsoft have done quite a bit of work to simplify the operational experience. Talk to us about that, and what are some of the measurable improvements that you guys have made?

>> Sure. It all starts with how we approach the problem, and we have always taken a very product-centric approach with Azure Stack HCI. You know, unlike some of our competition, which has followed a reference architecture approach, where you can put Windows Server 2019 on a server and go run your own servers and the hyper-converged stack on it, we have followed a very different approach. We have learned quite a lot, you know, we are the number one vendor in the HCI space, and we know a thing or two about HCI and what customers really need there. So that's why, from the very beginning, we have taken a product-centric approach, and doing that allows us to have productized offers, in terms of our AX nodes, that are specifically designed and built for Azure Stack HCI. And on top of that, we have done very specific integration into the management stack, with Windows Admin Center, the new management tool from Microsoft to manage both on-prem hyper-converged infrastructure and your Windows servers, as well as any VMs that you're running in Azure, to provide customers a very seamless, you know, single pane of glass for both on-prem as well as public cloud infrastructure.
And in doing that, our customers have really appreciated how simple it is to keep their clusters running and to reduce the maintenance windows. Based on some of the internal testing that we have done, IT administrators can reduce the time they spend maintaining the clusters by over 90%, with over 40% reduction in the maintenance window itself. And all of that leads to your clusters running in a healthy state. So you don't have to worry about pulling the right drivers and the right firmware from 10 different places and making sure they are qualified to run together; we provide one single pane of glass that customers can click on and see whether their clusters are compliant or not, and, if needed, go update. And all this has been made possible by joint engineering with Microsoft.

>> Can you just describe the difference between an all-in-one validated HCI solution, which is what you're delivering, versus competitors that are only delivering a reference architecture?

>> Absolutely. So if you're running just a reference architecture, you are running an operating system and a software stack on a server. We know that when it comes to running HCI, that means also running business-critical applications in a clustered environment. You need to make sure that all the hardware, the drivers, the firmware, the hard drives, the memory configuration, the network configurations, all of that, it can get very complex very easily. And if you have reference architectures, there is no way to know whether the components running in my node are certified or not. How do you tell, then, if a part fails, which part to send for a replacement? If you're just running a reference architecture, you have no way to say whether the hard drive that failed, or the one that was sent to the customer as a replacement, is certified for Azure Stack HCI or not. How do you really make a determination of the right firmware that needs to be applied to a cluster, or the drivers to be applied to the cluster, that are compliant and tested for Azure Stack HCI? None of these things are possible if you just have a reference architecture approach. That's why we have been very clear that our approach is a product-based approach. And very frankly, that's the feedback we've provided to Microsoft too, and we've been working very closely together. You see that now in the new Azure Stack HCI that Microsoft announced at Inspire this year, which brings Microsoft into the mainstream HCI space as a product offering, and not just as a feature or a few features within the Windows Server program.

>> Greg, I saw in the notes that with Azure Stack HCI at Swiff-Train you have reduced rack space by 50%, and you talked about some of the rack space benefits. But you've also reduced energy by 70%. Those are big, impactful numbers, impacting not just your day-to-day but the overall business.

>> That's true.

>> Last question for you, Greg. For your peers watching in any industry, what are your top recommendations for going with a validated all-in-one solution rather than a reference architecture?
>> Well, we looked at doing the reference architecture's path, if you will, because we're hands on we like to build things and I looked at it and like Puneet said, "Drivers and memory and making sure that everything is going to work well together." And not only that everything is going to work well together. But when something fails, then you get into the finger pointing between vendors, your storage vendor, your process vendor, that's not something that we need to deal with. We need to keep a business running. So we went with Dell, it's one box, you know, but one box per unit and then you Stack two of them together you have a cluster. >> You make it sound so easy. >> Let us question-- >> I put together children's toys that were harder than building the Stack I promise you, I did it in an afternoon. >> Music to my ears Greg, thank you. (Greg giggles) >> It was that easy >> That is gold >> Easier to put together Azure Stack HCI than some, probably even opening the box of some children's toys I can imagine. (all chuckling) >> We should use that as a tagline. >> Exactly. You should, I think you have a new tagline there. Greg, thank you. Puneet, well last question for you, Would Dell Technologies World sessions on hybrid cloud benefits with Dell and Microsoft? Give us a flavor of what some of the things are that the audience will have a chance to learn. >> Yeah, this is a great session with Microsoft that essentially provides our customers an overview of our joint hybrid cloud solutions, both for Microsoft Azure Stack Hub, Azure stack HCI as well as our joint solutions on VMware in Azure. But much more importantly, we also talk about what's coming next. Now, especially with Microsoft as your Stack at CIO's a full blown product. Hyper hybrid, you know, HCI offering that will be available as, Azure service. So customers could run on-prem infrastructure that is Hyper-converged but managed pay bill for as an Azure service, so that they have always the latest and greatest from Microsoft. And all the product differentiation we have created in terms of a product-centric approach, simpler lifecycle management will all be applicable, in this new hybrid, hybrid cloud solution as well. And that led essentially a great foundation for our customers who have standardized on Hyper-V, who are much more aligned to Azure, to not worry about the infrastructure on-prem. But start taking advantages of both the modernization benefits of HCI. But much more importantly, start coupling back with the hybrid ecosystem that we are building with Microsoft, whether it's running an Azure Kubernetes service on top to modernize the new applications, and bringing the Azure data services such as Azure SQL Server on top, so that you have a consistent, vertically aligned hybrid cloud infrastructure Stack that is not only easy to manage, but it is modern, it is available as a pay as you go option. And it's tightly integrated into Azure, so that you can manage all your on-prem as well as public cloud resources on one single pane of glass, thereby providing customers whole lot more simplicity, and operational efficiency. >> And as you said, the new tagline said from, beautifully from Greg's mouth, "The customer easier to put together than many children's toys." Puneet, thank you so much for sharing with us what's going on with Azure Stack HCI, what folks can expect to learn and see at Dell Tech World of virtual experience. >> Thank you. >> And Greg, thank you for sharing the story, what you're doing. 
Helping your peers learn from you. And I'm going to say on behalf of Dell Technologies, that awesome new tagline. That was cool. (Greg chuckles) (Lisa chuckles) >> Thank you. 'Preciate your time. >> We're going to use it for sure. (Greg chuckles) >> All right, for Puneet Dhawan and Greg Altman. I'm Lisa Martin. You're watching theCUBE's coverage of Dell Technologies World, the Digital Experience. (soft music)

Published Date : Oct 21 2020

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Lisa Martin | PERSON | 0.99+
Puneet | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Greg Altman | PERSON | 0.99+
Greg | PERSON | 0.99+
Puneet Dhawan | PERSON | 0.99+
Dell Technologies | ORGANIZATION | 0.99+
Dell | ORGANIZATION | 0.99+
Texas | LOCATION | 0.99+
eight | QUANTITY | 0.99+
Florida | LOCATION | 0.99+
Louisiana | LOCATION | 0.99+
Puneet Dhawan | PERSON | 0.99+
one box | QUANTITY | 0.99+
Arkansas | LOCATION | 0.99+
Nutanix | ORGANIZATION | 0.99+
five | QUANTITY | 0.99+
Oklahoma | LOCATION | 0.99+
US | LOCATION | 0.99+
Swiff-Train Company | ORGANIZATION | 0.99+
70% | QUANTITY | 0.99+
nine | QUANTITY | 0.99+
50% | QUANTITY | 0.99+
two | QUANTITY | 0.99+
six years | QUANTITY | 0.99+
Azure Stack HCI | TITLE | 0.99+
two companies | QUANTITY | 0.99+
six hours | QUANTITY | 0.99+
Swiff-Train | ORGANIZATION | 0.99+
10 different places | QUANTITY | 0.99+
three rack | QUANTITY | 0.99+
Dell Tech | ORGANIZATION | 0.99+
both | QUANTITY | 0.99+
Windows 10 | TITLE | 0.99+

Peter Guagenti, Cockroach Labs | DockerCon 2020


 

>> Male narrator: From around the globe, it's the CUBE with digital coverage of DockerCon Live 2020 brought to you by Docker and its ecosystem partners. >> Hey, welcome back everyone to the DockerCon Virtual Conference. DockerCon 20 being held digitally online is the CUBE's coverage. I'm John for your host of the CUBE. This is the CUBE virtual CUBE digital. We're getting all the remote interviews. We're here in our Palo Alto studio, quarantined crew, all getting the data for you. Got Peter Guangeti who's the Chief Marketing Officer Cockroach Labs, a company that we became familiar with last year. They had the first multicloud event in the history of the industry last year, notable milestone. Hey first, it's always good you're still around. So first you got the first position, Peter. Great to see you. Thanks for coming on the CUBE for DockerCon 20. >> Thank you, John. Thanks for having me. >> So it's kind of interesting, I mentioned that tidbit to give you a little bit of love on the fact that you guys ran or were a part of the first multicloud conference in the industry. Okay, now that's all everyone's talking about. You guys saw this early. Take a minute to explain Cockroach Labs. Why you saw this trend? Why you guys took the initiative and took the risk to have the first ever multicloud conference last year? >> So that's news to me that we were the first, actually. That's a bit of a surprise, cause for us we see multicloud and hybrid cloud as the obvious. I think the credit really for this belongs with folks like Gartner and others who took the time to listen to their customer, right? Took the time to understand what was the need in the market, which, you know, what I hear when I talk to CEOs is cloud is a capability, not a place, right? They're looking at us and saying, "yes, I have a go to cloud strategy, "but I also have made massive investments in my data center. "I believe I don't want to be locked in yet again "to another vendor with proprietary PIs, "proprietary systems, et cetera." So, what I hear when I talk to customers is, "I want to be multicloud show me how, "show me how to do that in a way "that isn't just buying from multiple vendors, right?" Where I've cost arbitrage, show me a way where I actually use the infrastructure in a creative way. And that really resonates with us. And it resonates with us for a few reasons. First is, we built a distributed SQL database for a reason, right? We believed that what you really need in the modern age for global applications is something that is truly diverse and distributed, right? You can have a database that behaves like a single database that lives in multiple locations around the world. But then you also have things like data locality. It's okay with German data stays in Germany because of German law. But when I write my application, I never write each of these things differently. Now, the other reason is, customers are coming to us and saying, "I want a single database that I can deploy "in any of the cloud providers." Azure SQL, and that is a phenomenal product. Google Spanner is a phenomenal product. But once I do that, I'm locked in. Then all I have is theirs. But if I'm a large global auto manufacturer, or if I'm a startup, that's trying to enter multiple markets at the same time. I don't want that. I want to be able to pick my infrastructure and deploy where I want, how I want. And increasingly, we talk to the large banks and they're saying, "I spent tens or even hundreds of millions of dollars "on data centers. 
"I don't want to throw them out. "I just want better utilization. "And the 15 to 20% that I get "from deploying software on bare metal, right? "I want to be able to containerize. "I want to be able to cloudify my data center "and then have ultimately what we see more and more "as what they call a tripod strategy "where your own data center and two cloud providers "behaving as a single unit "for your most important applications." >> That's awesome. I want to thank you for coming on to, for DockerCon 20, because this is an interesting time where developers are going to be called to the table in a very aggressive way because of COVID-19 crisis is going to accelerate until they pull the future forward ahead of most people thought. I mean, we, in the industry, we are inside the ropes, if you will. So we've been talking about stainless applications, stateful databases, and all the architectural things that's got that longer horizon. But this is an interesting time because now companies are realizing from whether it's the shelter in place at scale problems that emerge to the fact that I got to have high availability at a whole nother level. This kind of exposes a major challenge and a major opportunity. We're expecting projects to be funded, some not to be funded, things to move around. I think it's going to really change the conversation as developers get called in and saying, "I really got to look at my resources at scale. "The database is a critical one because you want data "to be part of that, this data plane, if you will, "across clouds." What's your reaction to this? Do you agree with that, the future has been pulled forward? And what's Cockroach doing to help developers do manage this? >> Yeah, John, I think you're exactly right. And I think that is a story that I'm glad that you're telling. Because, I think there's a lot of signal that's happening right now. But we're not really thinking about what the implications are. And we're seeing something that's I think quite remarkable. We're seeing within our existing customer base and the people we've been talking to, feast or famine. And in some cases, feast and famine in the same company. And what does that really mean? We've looked at these graphs for what's going to happen, for example, with online delivery services. And we've seen the growth rates and this is why they're all so valued. Why Uber invested so big in Uber eats and these other vendors. And we've seen these growth rates the same, and this is going to be amazing in the next 10 years, we're going to have this adoption. That five, 10 years happened overnight, right? We were so desperate to hold onto the things that are what mattered to us. And the things that make us happy on any given day. We're seeing that acceleration, like you said. It's all of that, the future got pulled forward, like you had said. >> Yeah. >> That's remarkable, but were you prepared for it? Many people were absolutely not prepared for it, right? They were on a steady state growth plan. And we have been very lucky because we built an architecture that is truly distributed and dynamic. So, scaling and adding more resilience to a database is something we all learned to do over the last 20 years, as data intensive applications matter. But with a distributed SQL and things like containerization on the stateless side, we know we can just truly elastically scale, right? You need more support for the application of something like Cockroach. You literally just add more nodes and we absorb it, right? 
Just like we did with containerization, where you need more concurrency, you just add more containers. And thank goodness, right, because I think those who were prepared for those things need to be worked with one of the large delivery services. Overnight, they saw a jump to what was their peak day at any point in time now happening every single day. And they were prepared for that because they already made these architectural decisions. >> Yeah. >> But if you weren't in that position, if you were still on legacy infrastructure, you were still trying to do this stuff manually, or you're manually sharding databases and having to increase the compute on your model, you are in trouble and you're feeling it. >> That's interesting Peter to bring that up and reminds me of the time, if you go back in history a little bit, just not too far back, I mean, I'm old enough to go back to the 80s, I remember all the different inflection points. And they all had their key characteristics as a computer revolution, TCP IP, and you pick your spots, there's always been that demarcation point or lions in where things change. But let's go back to around 2004 and then 2008. During that time, those legacy players out there kind of was sitting around, sleeping at the switch and incomes, open-source, incomes, Facebook, incomes, roll your own. Hey, I'm going to just run. I'm going to run open-source. I'm going to build my own database. And that was because there was nothing in the market. And most companies were buying from general purpose vendors because they didn't have to do all the due diligence. But the tech-savvy folks could build their own and scale. And that changed the game that became the hyperscale and the rest is history. Fast forward to today, because what you're getting at is, this new inflection point. There's going to be another tipping point of trajectory of knowledge, skill that's completely different than what we saw just a year ago. What's your reaction to that? >> I think you're exactly right. We saw and I've been lucky enough, same like you, I've been involved in the web since the very early days. I started my career at the beginning. And what we saw with web 1.0 and the shift to web 2.0, web 2.0 would not have happened without source. And I don't think we give them enough credit if it wasn't for the lamp stack, if it wasn't for Linux, if it wasn't for this wave of innovation and it wasn't even necessarily about rolling around. Yeah, the physics of the world to go hire their own engineers, to go and improve my SQL to make it scale. That was of course a possibility. But the democratization of that software is where all of the success really came from. And I lived on both sides of it in my career, as both an app developer and then as a software executive. In that window and got to see it from both sides and see the benefit. I think what we're entering now is yet another inflection point, like you said. We were already working at it. I think, the move from traditional applications with simple logic and simple rules to now highly data intensive applications, where data is driving the experience, models are driving the experience. I think we were already at a point where ML and AI and data intensive decision-making was going to make us rewrite every application we had and not needed a new infrastructure. But I think this is going to really force the issue. And it's going to force the issue at two levels. 
First is the people who are already innovating in each of these industries and categories, were already doing this. They were already cloud native. They were already built on top of very modern third generation databases, third generation programming languages, doing really interesting things with machine learning. So they were already out innovating, but now they have a bigger audience, right? And if you're a traditional and all of a sudden your business is under duress because substantial changes in what is happening in the market. Retailers still had strength with footprint as of last year, right? We don't be thinking about e-commerce versus traditional retail. Yeah, it was on a slow decline. There were lots of problems, but there was still a strength there, that happened changed overnight. Right now, that new sources have dried up, so what are you going to do? And how are you going to act? If you've built your entire business, for example, on legacy databases from folks like Oracle and old monolithic ways of building out patients, you're simply not adaptable enough to move with changing times. You're going to have to start, we used to talk about every company needed to become a software company. That mostly happened, but they weren't all very good software companies. I would argue that the next generation used to to be a great software company and great data scientists. We'll look at the software companies that have risen to prominence in the last five to 10 years. Folks like Facebook, folks like Google, folks like Uber, folks like Netflix, they use data better than anyone else in their category. So they have this amazing app experience and leverage data and innovate in such a way that allow them to just dominate their category. And I think that is going to be the change we see over the next 10 years. And we'll see who exits what is obviously going to be a jail term. We'll see who exits on top. >> Well, it's interesting to have you on. I love the perspective and the insights. I think that's great for the folks out there who haven't seen those ways before. Again, this wave is coming. Let's go back to the top when we were talking about what's in it for the developer. Because I believe there's going to be not a renaissance, cause it's always been great, but the developers even more are going to be called to the front lines for solutions. I mean, these are first-generation skill problems that are going to be in this whole next generation, modern era. That's upon us. What are some of the things that's going to be that lamp stack, like experience? What are some of the things that you see cause you guys are kind of at a tail sign, in my opinion, Cockroach, because you're thinking about things in a different construct. You're thinking about multicloud. You're thinking about state, which is a database challenge. Stateless has kind of been around restful API, stateless data service measures. Kubernetes is also showing a cloud native and the microservices or service orientation is the future. There's no debate on that. I think that's done. Okay, so now I'm a developer. What the hell am I going to be dealing with for the next five years? What's your thoughts? >> Well, I think the developer knows what they're already facing from an app perspective. I think you see the rapid evolution in languages, and then, in deployment and all of those things are super obvious. You need just need to go and say I'm sure that all the DockerCon sessions to see what the change to deployment looks like. 
I think there are a few other key trends that developers should start paying attention to; they are really critical. The first one, and only loosely related to us, is MLOps, right? Just like we saw dev and ops suddenly come together so we can actually develop and deploy in a super fast, iterative manner, the same thing is now going to start happening with data and all of the work that we do around deploying models. And I think that's going to be a pretty massive change. You think about the rise of tools like TensorFlow and some of the developments that have happened inside the cloud providers, I think you're seeing a lot there. As a developer, you have to start thinking as much like a data scientist and a data engineer as simply somebody writing front end code, right? And I think that's a critical skill that the best developers are already building, and that will continue. I think then the data layer has become as important or more important than any other layer in the stack because of this. And you think about, once again, how the leaders are using data and the interesting things that they're doing, the tools you use matter, right? If you are spending a lot of your time trying to figure out how to shard something, how to make it scale, how to make it durable, when instead you should be focused on just the pure capability, that's a ridiculous use of your time, right? That is not a good use of your time. We're still using 20 to 25 year old open-source databases for many of these applications when they gave up their value probably 10 years ago. Honestly, you know, we kind of paper over it, but it's not a great solution. And unfortunately, while NoSQL will fix some of the issues with scaling and elasticity, it's like you and I starting a business and saying, "okay, everyone speaks English, but because we're global, everyone's going to learn Esperanto, right?" That doesn't work, right? It works for a developer, but not if you're trying to do something where everyone can interact. This is why this entire new third generation of NewSQL databases has risen: we took the distributed architecture and married it to SQL.

>> Hold up for a second. Can you explain what that means? Cause I think it's a key topic, I want to just call that out. What does this third generation database mean? Sorry, I speak about it like everyone sees it.

>> I think it's super important, so it's worth a highlight; I'll take a minute to explain it and we can get into it. There is an entire new wave of database infrastructure that has risen in the last five years. And it started actually with Google. So it started with Google Spanner. Google was the first to face most of these problems, right? They were the first to face web scale, at least at the scale we now know it. They were the first to really understand the complexity of working with data. They have their own NoSQL, they have their own way of doing things internally, and they realized it wasn't working. What they really needed was a relational database that spoke traditional ANSI SQL but scaled like the NoSQL counterparts. And there was a white paper that was released; that was the birth of Spanner. Spanner was an internal product for many, many years. They released the thinking into the wild and then they just started this wave of innovation. That's where our company came from. And there were others like us who said, "you're right.
"Let's go build something that behaves," like we expect a database to behave with structure and this relational model and like anyone can write simple to use it. It's the simplest API for most people with data, but it behaves like all the best distributed software that we've been using. And so that's how we were born. Our company was founded by ex Googlers who had lived in this space and decided to go and scratch the itch, right? And instead of doing a product that would be locked into a single cloud provider, a database that could be open-source, it could be deployed anywhere. It could cross actual power providers without hiccups and that's been the movement. And it's not just us, there were other vendors in this space and we're all focused on really trying to take the best of the both worlds that came before us. The traditional relational structure, the consistency and asset compliance that we all loved from tools like Oracle, right? And Microsoft who we really enjoyed. But then the developer friendly nature and the simple elastic scalability of distributed software and, that's what we're all seeing. Our company, for example, has only been selling a product for the last two years. We found it five years ago, it took us three years just to rank in the software that we would be happy selling to a customer. We're on what we believe is probably a 10 to 15 year product journey to really go and replace things like Oracle. But we started selling the product two years ago and there is 300% growth year over year. We're probably one of the fastest growing software companies in America, right? And it's all because of the latent demand for this kind of a tool. >> Yeah, that's a great point. I'm a big fan of this third wave. Can I see it? If you look at just the macro tailwinds in the industry, billions of edged devices, immersion of all kinds of software. So that means you can't have one database. I always said to someone, in (mumbles) and others. You can't have one database. It's physically impossible. You need data and whatever database fits the scene, wherever you want to have data being stored, but you got to have it real time. You got to have actionable, you have to have software intelligence into how to manage the data. So I think the data control plane or that layer, I think it's the next interoperability wave. Because without data, nothing really works. Machine learning doesn't really work well. You want the most data. I think cybersecurity is a great early use case because they have to leverage data fast. And so you start to see some interesting financial services, cyber, what's your thoughts on this? Can you share from the Cockroach Labs perspective, from your database, you've got a cloud. What are some of the adoption use cases? Who are those leaders? You can name names if you have them, if not, name the use case. What's the Cockroach approach? Who's winning with it? What's it look like? >> Yeah, that's a great question. And you nailed it, right? The data volumes are so large and they're so globally distributed. And then when you start layering again, the data streaming in from devices that then have to be weighed against all of these things. You want a single database. But you need one that will behave in a way that's going to support all of that and actually is going to live at the edge like you're saying. And that's where we have been shining. And so our use cases are, and unfortunate, I can't name any names, but, for example, in retail. 
We're seeing retailers who have that elasticity and that skill challenge with commerce. And what they're using us for is then, we're in all of the locations where they do business, right? And so we're able to have data locality associated with the businesses and the purchases in those countries. And however, only have single apps that actually bridge across all of those environments. And with the distributed nature, we were able to scale up and scale down truly elastically, right? Because we spread out the data across the nodes automatically. And, what we see there is, you know, retailers do you have up and down moments? Can you talk about people who can leverage the financial structure of the cloud in a really thoughtful way? Retail is a shining example of that. I remember having customers that had 64 times the amount of traffic on cyber Monday that they had on the average day. In the old data center world, that's what you bought for. That was horrendous. In a cloud environment, still horrendous, even public cloud providers. If you're having to go and change your app to ramp every time, that's a problem with something like a distributed database. and with containerization, you could scale much more quickly and scale down much more. That's a big one for streaming media, is another one. Same thing with data locality in each of these countries, you think about it, somebody like Netflix or Hulu, right? They have shows that are unique to specific countries, right? They haven't have that user behavior, all that user data. You know data sovereignty, you know, what you watch on Netflix, there's some very rich personal data. And we all know how that metadata has been used against people. Or so it's no surprise that you now have countries that I know there's going to be regulation around where that data can live and how it can. And so once again, something like Cockroach where you can have that global distribution, but take a locality, or we can lock data to certain nodes in certain locations. That's a big one. >> There's no doubt in my mind. I think there's such a big topic. We probably do more interviews just on the COVID-19 data problem that they have. The impact of getting this right, is a nerd problem today. But it is a technology solution for society globally in the future. Zero doubt in my mind on that. So, Peter, I want you to get the last word and to give a plugin to the developers that are watching out there about Cockroach. Why should they engage with you guys? What can you offer? Is there anything new you want to share about the company to the audience here at DockerCon 2020? Take us home in the next segment. >> Thank you, John. I'll keep the sales pitch to a minimum. I'm a former developer myself. I don't like being sold, so I appreciate it. But we believe we're building, what is the right database for the coming wave of cognitive applications. And specifically we've built what we believe is the ideal database for distributed applications and for containerized applications. So I would strongly encourage you to try it. It is open-source. It is truly cloud native. We have free education, so you can try it yourself. And once you get into it, it is traditional SQL that behaves like Postgres and other tools that you've already known of. And so it should be very familiar, you know, if you've come up through any of these other spaces will be very natural. Postgres compatible integrates with a number of ORM. 
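As a hedged illustration of what "Postgres compatible" means in practice, here is a minimal sketch of talking to a CockroachDB node with an ordinary PostgreSQL driver (psycopg2). The host, database, and user shown are placeholders for a local insecure test node (26257 is CockroachDB's default SQL port), not a recommended production setup, and ORMs that speak the Postgres dialect can connect the same way.

```python
# Minimal sketch: a standard PostgreSQL driver talking to CockroachDB.
# Connection parameters assume a local, insecure demo node and are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=26257,          # CockroachDB's default SQL port
    dbname="defaultdb",
    user="root",
    sslmode="disable",   # acceptable only for a local --insecure demo node
)
conn.autocommit = True

with conn.cursor() as cur:
    # Plain PostgreSQL-flavored SQL; nothing CockroachDB-specific is required here.
    cur.execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance DECIMAL)")
    cur.execute("INSERT INTO accounts (id, balance) VALUES (1, 1000.50) ON CONFLICT (id) DO NOTHING")
    cur.execute("SELECT id, balance FROM accounts")
    for row in cur.fetchall():
        print(row)

conn.close()
```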
>> Well, thanks for coming on and sharing your awesome insight and the historical perspective you've gotten out of experience. We believe, and we want to share with the audience, that in this time of crisis it's more important than ever to focus on the critical nature of operations, because coming out of this, it is going to be a whole new reality. And I think the best tech will win the day, and people will be building new things to grow, whether it's for profit or for societal benefit. The impact of what we do in the next year or two will determine a big trajectory, and new technology and new approaches are dealing with the realities of infrastructure, scale, working at home, sheltering in place, and coming back to the hybrid world. We're coming out of this virtualized, Peter. We've been virtualized, the media, the lifestyle, not just virtualization in the networking sense. Fun times; it is going to be challenging. So thanks for coming on. >> Thank you very much, John. >> Okay, we're here for the DockerCon 2020 virtual conference, the CUBE Virtual segment. I want to thank you for watching. Stay with me. We've got streams all day today, so check out the sessions. Jump in; it's going to be on demand. There are a lot of videos, and it's going to live on. Thanks for watching, and stay with us for more coverage and analysis. Here at DockerCon 20, I'm John Furrier. Thanks for watching. >> Narrator: From the CUBE studios in Palo Alto and in Boston, connecting with thought leaders all around the world. This is the CUBE conversation.

Published Date : May 29 2020


Joachim Hammer, Microsoft | Microsoft Ignite 2018


 

>> Live from Orlando, Florida. It's theCUBE. Covering Microsoft Ignite. Brought to you by Cohesity, and theCUBE's ecosystem partners. >> Welcome back everyone to theCUBE's live coverage of Microsoft Ignite here in Orlando, Florida. I'm your host, Rebecca Knight, along with my cohost Stu Miniman. We're joined by Joachim Hammer, he is the Principal Product Manager at Microsoft. Thanks so much for coming on the show. >> Sure, you're welcome. Happy to be here. >> So there's been a lot of news and announcements with Azure SQL. Can you sort of walk our viewers through a little bit of what's happened here at Ignite this week? >> Oh, sure thing. So first of all, I think it's a great time to be a customer of Azure SQL Database. We have a lot of innovations, and the latest one that we're really proud of, which we've just announced is GA, is SQL Managed Instance. So far, our family of database offers had a single database and then a pool of databases where you could do resource sharing. What was missing was the ability for enterprise customers to migrate their workloads into Azure and take advantage of Azure without having to do any rewriting or refactoring, and Managed Instance does exactly this. It's a way for enterprise customers to take their workloads and migrate them. It has all the features that they are used to from SQL Server on-prem, including all the security, which is of course, as you can imagine, always a concern in the cloud, where you need to have the same or better security than customers are used to on-prem. And with Managed Instance we have the security isolation, we have private virtual networks, we have all the intelligent protection that we have in Azure, so it's a real package. And so this is a big deal for us, and the general purpose tier went GA yesterday actually, so I heard. >> Security's really interesting, 'cause of course the database is at the core of so many customers' businesses. You've been in this industry for a while. What do you see from customers as to the drivers and the differences of going to public cloud deployments versus really owning their database in-house, and is security meeting what customers need now? >> Yeah, sure. So, you're right, security is probably the most important topic, or one of the most important topics, that comes up when you discuss the cloud. And what customers want is trust; they want this trust relationship that we do the right thing. And doing the right thing means we have all the compliances, we adhere to all the privacy standards, but then we also offer them state-of-the-art security, so that they can rely on Microsoft and on Azure, for however many years they want to use the cloud, to keep developing leading-edge security for customers. And we do this, for example, with our encryption technology, Always Encrypted. This is one of those technologies that helps you protect your database against attacks by encrypting sensitive data, and the data remains encrypted even though we process queries against it. So we protect against third-party attacks on the database. Always Encrypted is one of those technologies that may not be for everybody today, but customers get the sense that yes, Microsoft is thinking ahead, they're developing this security offering, and I can trust them to continue to do this and keep my data safe and secure.
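To make the Always Encrypted capability Joachim describes a little more concrete, the column-level setup in Azure SQL Database looks roughly like the sketch below. The key names, Key Vault path, table, and hex value are hypothetical placeholders; in practice the keys are usually provisioned through SQL Server Management Studio or PowerShell, with the column master key held in Azure Key Vault or a client-side certificate store.

```sql
-- Column master key: metadata pointing at a key the database engine never sees.
CREATE COLUMN MASTER KEY CMK_Demo
WITH (
    KEY_STORE_PROVIDER_NAME = 'AZURE_KEY_VAULT',
    KEY_PATH = 'https://contoso-vault.vault.azure.net/keys/AlwaysEncryptedCMK/0123456789abcdef'  -- hypothetical
);

-- Column encryption key, stored wrapped (encrypted) by the column master key.
CREATE COLUMN ENCRYPTION KEY CEK_Demo
WITH VALUES (
    COLUMN_MASTER_KEY = CMK_Demo,
    ALGORITHM = 'RSA_OAEP',
    ENCRYPTED_VALUE = 0x016E000001630075  -- placeholder; generated by tooling, not written by hand
);

-- The sensitive column stays encrypted in the database; the client driver decrypts it.
CREATE TABLE dbo.Patients (
    PatientId INT IDENTITY(1,1) PRIMARY KEY,
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK_Demo,
            ENCRYPTION_TYPE = DETERMINISTIC,            -- deterministic allows equality lookups
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ) NOT NULL
);
```

Queries against the table are written as usual; a client driver with column encryption enabled handles encryption and decryption, which is what keeps plaintext away from the server and from third-party attacks on the database.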
>> Trust is so fundamental to this whole enterprise. How do you build trust with your customers? I mean, you have the reputation, but how do you really go about getting your customers to say "Okay, I'm going to board your train?" >> That's a good question, Rebecca. I think, as I said, it starts with the portfolio of compliance requirements that we have and that we provide for Azure SQL Database and all the other Azure services as well. But it also goes beyond that. For example, we have this right-to-audit capability in Azure, where a company can come to us and say, we want to look behind the scenes, we want to see what auditors see, so that we can really believe that you are doing all the things you're saying: you're updating your virus protection, you're patching, and you have all the right administrative workflows. So this is one way for us to say our doors are open; if you want to come and see what we do, then you can come and peek behind the scenes, so to speak. And then the other part, the third part, is by developing features like we do that help customers, first of all, make it easy to secure the database, help them understand vulnerabilities, help them understand the configurations of their database, and then implement the security strategy that they feel comfortable with, and letting them move that strategy into the cloud and implement it. I think that's what we do in Azure, and that's why we've had so much success so far. >> Earlier this week we interviewed one of your peers, talked about Cosmos DB. >> Okay. >> There's a certain type of scale we talk about there. Scale means different things to different sized customers. What does scale mean in your space? >> Yeah, so you're right, scale can mean a lot of different things, and actually thank you for bringing this up, because we have another announcement that we made, namely the Hyperscale architecture. So far in Azure SQL DB we were pretty much constrained in terms of space by the underlying hardware, how much storage comes on these VMs, and thanks to our re-architected software, we now have the ability to scale way beyond four terabytes, which is the current limit of Azure SQL DB. So we can go to 64 terabytes, 100 terabytes. And not only does that free us from the limitations, but it also keeps it simple for customers. Customers don't have to go and build a complicated scale-out architecture to take advantage of this. They can just turn a knob in a portal, and then we give them as much horsepower as they need, including the storage. And in order for this to happen, we had to do a lot of work. We didn't just re-architect storage, we also had to make failovers faster. We have to continue to invest in online operations, like online index rebuild and create, to make those pausable and resumable, so that with bigger and bigger databases you can actually do all those activities that you used to do without getting in the way of your workloads. So, a lot of work, but we have Hyperscale now in Azure SQL DB, and I think this is another thing that customers will be really excited about.
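For reference, the Hyperscale tier Joachim mentions is exposed through an ordinary edition and service-objective setting. The database name and the specific 'HS_Gen5_4' objective below are illustrative; available objectives vary by hardware generation and region, and moving an existing database to Hyperscale was a one-way operation at the time of this conversation.

```sql
-- Create a new Azure SQL database directly in the Hyperscale tier.
CREATE DATABASE TelemetryDB
    ( EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_4' );

-- Or move an existing database to Hyperscale by changing its service objective.
ALTER DATABASE TelemetryDB
    MODIFY ( SERVICE_OBJECTIVE = 'HS_Gen5_4' );
```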
>> Sounds like that could have been a real pain point for a lot of DBAs out there. And I'm wondering, I'm sure, as a PM, you get lots of feedback from customers. What are the biggest challenges they're facing? What are some of the things they're excited about that Microsoft's helping them with these days? >> So you're right, this was a big pain point, because if you go to a big enterprise customer and say, hey, bring your workload to Azure, and then they say, oh yeah, great, we've got this big telemetry database, what's your size limit? And you have to say four terabytes, that doesn't go too well. So that's one thing, we've removed that blocker, thankfully. As for other pain points, I think by and large the big pain points we've removed. I think we have small ones, where we're still working on making our deployments less painful for some customers. There are customers who are really, really sensitive to disconnects or to variations in latency, and sometimes when we do deployments, worldwide deployments, we are impacting somebody's customer, so this is a pain point that we're currently working on. Security, as you said, is always a pain point, so this is something that will stay with us, and we just have to make sure that we're keeping up with the security demands from customers. And then another pain point, or something that has been a pain point for customers, especially customers running SQL Server on-prem, is performance tuning. You have to be a really, really good DBA to tune your workloads well, and so this is something that we are working on in Azure SQL DB with our intelligent performance tuning. This is a pain point that we are removing; we've removed a lot of it already. Occasionally there are still customers who complain about performance, and that's understood. This is something that we're also trying to help them with: make it easier, give them insights into what their workload is doing, where the waits are, where the slow queries are, and then help them address that.
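The intelligent performance tuning and workload insights Joachim refers to surface in a few concrete places today. A hedged sketch is below; it assumes a database with Query Store enabled, and the column selection is illustrative rather than exhaustive.

```sql
-- Let the service apply (and automatically back off, if it does not help) the
-- last known good plan when it detects a plan-choice regression.
ALTER DATABASE CURRENT
    SET AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = ON );

-- See what the engine has recommended or already applied.
SELECT name, type, reason, state
FROM sys.dm_db_tuning_recommendations;

-- Query Store: the slow queries and waits those insights are built on.
SELECT TOP (10)
       qt.query_sql_text,
       rs.count_executions,
       rs.avg_duration,
       rs.avg_cpu_time
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_plan AS p        ON rs.plan_id = p.plan_id
JOIN sys.query_store_query AS q       ON p.query_id = q.query_id
JOIN sys.query_store_query_text AS qt ON q.query_text_id = qt.query_text_id
ORDER BY rs.avg_duration DESC;
```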
>> So, thinking about these announcements and the changes that you've made to improve functionality and scale, and to not have size limits be such a roadblock, when you're thinking ahead to making the database more intelligent, what are some of the things you're most excited about that are still in progress right now, still in development, that we'll be talking about at next year's Ignite? >> Yeah, so personally, for me, on the security side, what's really exciting to me is this: security is a very complicated topic, and not all of our customers are fully comfortable figuring out what their security strategy is, how to implement it, and whether their data is really secure. So, understanding threats, understanding all this technology. I think one of the visions that gets me excited about the potential of the cloud is that we can make security in the future hopefully as easy as we were able to make query processing with the invention of the relational model, where we made this leap from having to write code to access your data to basically a declarative, SQL-type language where you say, this is what I want, and I don't care how the database system returns it to me. If you translate that to security, what would be ideal, sort of the North Star, is to have customers tell us, in some sort of declarative, policy-based manner: I have some data that I don't trust to the cloud, please find the sensitive information here, and then protect it so that I'm meeting ISO, or I'm meeting HIPAA requirements, or I'm meeting my internal policies, you know, every company has internal policies about how data needs to be secured and handled. And so if you could translate that into a declarative policy and then upload that to us, and we figure out behind the scenes these are the things we need: you need to turn on auditing, this is where the audit events have to go, and this is where the data has to be protected. But before all that, we actually identify all the sensitive data for you, we tag it, and so forth. That, to me, has been a tremendous, sort of untapped potential of the cloud. That's where I think this intelligence could go, potentially. >> Yeah, great. >> Who knows, maybe. >> (laughs) Well, we shall see at next year's Ignite. >> We are making headway there. We have a classification engine that helps customers find sensitive data. We have a vulnerability assessment, a rules engine that allows you to basically test the configuration of your database against potential vulnerabilities, and we have threat detection. So we have a lot of the pieces, and I think the next step for us is to put these all together into something that can be much more automated, so that a customer doesn't have to think technology anymore. They can think business. They can think about the kinds of compliances they have to meet. They can think about, based on these compliances, this data can go this month, this data can go maybe next year, in those kinds of terms. So I think that, to me, is exciting. >> Well, Joachim, thank you so much for coming on theCUBE. It was a pleasure having you here. >> It was my pleasure too. Thank you. >> I'm Rebecca Knight, for Stu Miniman. We'll have more from theCUBE's live coverage of Microsoft Ignite coming up in just a little bit. (upbeat music)
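The classification engine and labeling pieces described at the end of that answer are already exposed as T-SQL building blocks; a hedged sketch of today's manual form is below. The table, column, and label names are hypothetical, and this is the hand-written version of what Joachim envisions becoming declarative and automatic.

```sql
-- Tag a column with a sensitivity label and information type, so it appears in
-- Data Discovery & Classification reports and is stamped into audit records.
ADD SENSITIVITY CLASSIFICATION TO dbo.Patients.SSN
WITH ( LABEL = 'Highly Confidential', INFORMATION_TYPE = 'National ID' );

-- Review what has been classified so far.
SELECT SCHEMA_NAME(o.schema_id) AS schema_name,
       o.name                   AS table_name,
       c.name                   AS column_name,
       sc.label,
       sc.information_type
FROM sys.sensitivity_classifications AS sc
JOIN sys.objects AS o ON sc.major_id = o.object_id
JOIN sys.columns AS c ON sc.major_id = c.object_id
                     AND sc.minor_id = c.column_id;
```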

Published Date : Sep 25 2018


Jagane Sundar, WANdisco | CUBEConversation, May 2018


 

(intense orchestral music) >> Hi, I'm Peter Burris, welcome to another CUBEConversation. Today we've got a special guest from WANdisco, Jagane Sundar, who's the CTO. Jagane, welcome to theCUBE again! >> Thanks Peter, happy to be here! >> So Jagane, we've got a lot to talk about today. WANdisco's doing a lot of new things, but clearly the industry is, itself, in the midst of a relatively important evolution. Now, we at Wikibon and SiliconANGLE have been calling it the transformation to digital business. Everybody talks about this, but we've been pretty specific: we think that it boils down to how a company uses data as an asset, and the degree to which it's institutionalizing, or re-institutionalizing, work around those assets. How does WANdisco see this big transformation that we're in the midst of right now? >> So, you're exactly right, businesses are transforming from traditional means to a digitally based business, and the most important thing about that is the data. WANdisco is at the forefront of making your data available for your innovation. We start off with the basic use cases, disaster recovery, that's a traditional problem that people have half-solved in many different ways, but we have the ability to solve that problem and take you to the next stage, which is what we call live data, where you don't worry about the availability or the location of your data anymore. Finally, we take you from that live data platform to a place where you can invent with your data, the freedom-to-invent phase, as we call it. Now, that's what you're calling the digital transformation, and there's great synergy between our two terminologies, that's an important aspect here. >> So let me unpack that a little bit, if I can. So the core notion is that every business has to start acknowledging that data is something more than the exhaust that comes out of applications; it really is a core data asset. So let's start with this notion of backup and restore, or disaster recovery. The historical orientation is: I have these very expensive assets, typically in the form of hardware, or maybe applications, and I have to ensure that I can back those assets up. So backup and restore used to be back up a device, back up a volume, back up whatever else it might be, and now it's moved to more of a backup of a virtual machine. I think we're talking about something different: when we talk about your approach to backup and restore, we're really talking about backing up data assets, do I have that right? >> That is correct. You have gone from a place where you are backing up PCs and Macintoshes and cellphones, to a place where the digital assets of your company, the ones that are useful for analytics, are far more important. Now, a simple backup, where you take the contents in one data center and push it to another data center, is a half-solution to the problem. What we've come up with is this notion called live data. You have multiple data centers, some of them you own, they're on premises, some of them are Cloud vendor data centers, and they definitely reside in different parts of the world. Your data also is generated in different parts of the world. Now all of this data goes into this data system, this platform that we've built for you, and it's available under all circumstances. If a region of a Cloud vendor goes down, or if your own data center goes down, that's a non-event, because that data is available in other data centers around the world. This gives you the flexibility to treat this as a live data platform.
You can write data where you want, you can read and run analytics wherever you want. You've gone from backing up PCs and phones to actually using your digital assets in a manner such that you can make critical business decisions based on them. Imagine an insurance company that's underwriting policies based on this digital data. If the data's not available, you've got a full halt on the business, and that's not acceptable. If the data is not available because a specific data center went down, you can't call a full stop to your business; you've got to make it available. Those are simple examples of how digital transformation is happening, and regular backup and DR are really inadequate to fuel your digital transformation. >> In fact, we like to think, and we're advising our clients, that as they think about digital transformation and the role that data's playing, a digital business is not just backing up and restoring or sustaining or avoiding disasters associated with the data; they're really talking about backing up and restoring their entire business. That's kind of what we mean when we talk about DR, disaster recovery, or backup and restore, in a digital business sense. And as you said, this notion of live data increases our ability to do that, but partly that requires a second kind of step. By that I mean, most people think about storage in terms of where data's located, in terms of persisting the data. When we talk about this new approach, we're talking about ensuring that we can deliver the data. Restore takes on more importance, relative to backup, than it has before; would you agree with that? That talking about live data is really about being able to restore data wherever it's needed. >> It's an interesting new approach where we don't really define a primary and a backup. One of the important things about our Paxos-based replication system is that each location, each instance or replica of your data, is exactly equal. So if you have a West Coast data center, an East Coast data center, and a Midwest data center, and your West Coast data center happens to go down, none of the activities that you perform on your data will stop; you can continue writing your data to your Midwest and your East Coast data centers, and you continue writing and reading, running your applications against this data set. There wasn't a definition that the West Coast is primary and the East Coast is backup. When a disaster strikes, we will cut over to the backup, we'll start using that, and when the primary comes back, now we have to reconcile it: that's the traditional way of doing things, and it brings about some really bad attributes, such as needing to have all your data pumped into one data center, which is counter to our philosophy. We believe that live data is where each of these replicas is equal; we've built a platform for you where you can write to any of these, and you can run your analytics against any of those. Once you get past that mental hurdle, what you've got is the freedom to innovate. You can look at it and go: I've got my data available everywhere, I can write to it, I can read from it, what can I do with this data? How can I quickly iterate so I can make more interesting business decisions, more relevant business decisions that will result in better business, profits and revenue.
This interesting outcome is because you're now not concerned about the availability of data, or about primary, backup, failover and failback; all those disappear from your radar. >> So let me build on that a little bit too, Jagane. The way we would describe that is that a digital business must have those data assets, those crucial data assets, available so that they can be delivered to applications and new activities. So we think in terms of what we call data zones, where the idea is: you take a look at what your digital business value proposition is, what activities are essential to delivering on that value proposition, and then whether or not the data is in a zone proximate to that activity, so that activity can actually be executed. So that means, from a physical standpoint, it needs to be there; from a legal standpoint; from an intellectual property control standpoint; from a cost standpoint; but also from a consistency standpoint, because you don't want dramatically different behaviors in your business just because the data that's over there is not consistent with the data that's over here. That's kind of what you guys are looking at. Now, ultimately, going out a little bit, that means that this notion of deploying data so it serves your business now has to also include a futures orientation: we want to choose technologies that give us high-value options on data futures as well. Is that what you mean, effectively, by freedom to invent? >> It's definitely one aspect of our definition of freedom to invent. We are focused fully on complying with some of these requirements that you talked about. Regions of data, for example: there are parts of the world where you cannot take the data from that part of the world outside, but often you need to do analytics in a global manner, such that if you detect a flaw or a problem that is surfaced by data in one part of the world, the chances are very good that it will apply to this restricted zone as well. You want to be able to apply your analytics against that. Critical business decisions may need to be made, yet you cannot export that data out of that country; we facilitate such capabilities. So we've gone from a simpler primary-backup type of system to a live data platform. And finally, we've given you the freedom to invent, because you can now take a look at it and go: I can start building applications that are in the critical business path, because I'm confident of the availability of my data, and of the fact that we comply with all the regulatory compliance requirements; things like aging out data after a certain number of months or days, we can help you do that really well with our platform. So yes, in fact, the notion that data resides in different pools, in different areas, replicated consistently, available under all circumstances, enables a business to think about its data in a completely different manner, to up-level it. >> And satisfying physical, legal, intellectual property, and cost realities. >> Exactly. Those are all compliance concerns that need to be addressed by this replication platform. >> So as we think about where customers are going with this, clearly they've started around this backup and restore, but it sounds like you guys are helping them today conceive of what it means to do backup and restore and analytics. That is a particularly sensitive issue for a lot of businesses right now that are trying to marry together data science and good practices associated with IT.
How is that playing out? Can you give us some insight into how customers are doing a better job of that? >> Sure. A global automaker that has acquired our software for data replication started off by using it for two very simple use cases. They were looking at migrating from an older version of a data system to a newer version; we enabled them to do that without downtime, and that was a clear win for us. The second thing they wanted to do was enable a disaster recovery type of scenario. Once we got to that stage, we showed them how easy it was for them to continue writing to what was originally, notionally, the backup system. That made about twice as much compute resource available for them, because their original notion was that the backup system would just be a backup system, and nothing could be done on it. Light bulbs went off in our customers' heads; they looked at it and went: I can continue writing here even if my primary goes down, there's no real notion of a backup, there's no real notion of failover and failback. That opened their minds to a whole bunch of new ideas. Now they are in a position to build some business-critical applications. Gone are the days when analytics meant you run a report once a week and send it off to the CIO. It's not that anymore, it's up-to-the-minute accuracy: insurance companies making underwriting decisions, and healthcare companies tracking the spread of diseases, based on up-to-the-minute information that they're getting. These are not once-weekly analytics applications anymore; these are truly businesses that are based on their digital data. >> So a fundamental promise of live data is that wherever the data is, the application is live? >> Jagane: Yes, absolutely. >> Alright, one more thing I think we want to talk about very quickly, Jagane, is that there are some differences in mindset that a CIO has to apply here. Again, the CIO used to look at the assets and say: machines, the hardware, yes, and maybe the applications. Now, to really see the value of this, they have to think of it in terms of data being the asset. How are your customers starting to evolve that notion so that they see the problem differently? >> So, I think the first thing that happened was the Cloud. We can't take credit for that, of course, but it helped our cause a great deal, because people looked at infrastructure with a completely different viewpoint. They don't look at it as, I'm going to buy a server of this size to run my Oracle; that mentality went away, and people started looking at it as, I have to store my data here and I can run an elastic application on it, I can grow my resources on demand and surrender those resources back to the Cloud when I don't need them. We take that to the next step: we enable them to have consistent replicas of their data across multiple regions of Cloud vendors, and across different Cloud vendors. Suddenly they have the ability to do things like, I can run this analytics on Redshift here in Amazon really well, and I can use this same data to run it on Azure SQL DW over here, which is a better fit for this specific use case. We've opened up the possibilities to them, such that they don't worry about what data they're going to use or how much resource they're going to get; resources are truly elastic now, you can buy and surrender resources as per your demand, so it's opening up possibilities that they never had before. >> Excellent!
Jagane Sundar, CTO of WANdisco, talking about live data, and the journey the customers are on to make themselves more fully digital businesses. >> Thanks, Peter. >> Once again this is Peter Burris from theCUBE, CUBEConversation with Jagane Sundar of WANdisco. (intense orchestral music)

Published Date : May 17 2018


Karl Rautenstrauch, Microsoft | VeaamOn 2018


 

>> Announcer: Live from Chicago, Illinois, it's theCUBE, covering VEEAMON 2018. Brought to you by Veeam. >> Welcome back to VEEAMON 2018 in Chicago, everybody. You're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante. I'm here with my cohost, Stu Miniman. Karl Rautenstrauch is here, Karl Rautenstrauch, Senior Program Manager for Azure Storage at Microsoft. Karl, thanks for coming on. >> It's a pleasure, guys. Thank you for having me. >> You've got a beautiful picture of your family. You got three boys at home, is that right? >> Karl: Three boys. >> Alright. >> They keep me out of trouble. They get into it, they keep me out of it. >> I'm one of three boys. My mom, you know, kept us going. You must have a strong woman at home. >> She is a saint. >> At any rate, thanks for coming on. We love talking Microsoft Azure, Cloud and storage. Let's start with your role. >> Karl: Sure. What do you have? >> What do you do at Microsoft? >> Absolutely. So for the last year I've been program manager with the storage team, and I've kind of a unique role. Usually you see program managers who focus on features, right? You are championing a new feature in your service, your platform. For me, I get to work with our partner ecosystem. So I spend a lot of time with our great partners, like Veeam, and our channel partners, like SHI, CDW, Softchoice, Insight. I'll tell you, I've got the best job in the business. I can't complain. I get to work with great, smart people everyday. >> So is your role transferring knowledge to those partners, assisting those partners, acting as a catalyst, gathering information from them and feeding it back to the product teams? >> Yeah, really all of the above. Helping to make sure that we've got a combined solution, an end-to-end solution, that's the best thing for our customers. So everything from upfront assessment through implementation through health check afterwards, our goal is to have the happiest customers in the public Cloud, and we can't do that without our partners. >> How should we think about the Azure Storage portfolio? Can you paint a picture for us? >> Oh boy, it has grown drastically just in the last couple of months. So not only do we have our first party offerings in the disk, traditional VM disk as we all know it, you're going to attach to a server, we have hosted file infrastructures where we provide file shares that don't require a server to manage, our partnership with NetApp where we are going to be operating NetApp systems in our data centers and offering their native services. And we just continue to expand with big data solutions, with Avere, our new acquisition, that is really aimed at high performance compute environments like we see in genomics and media and entertainment. It's just a portfolio that continues to grow. We all joke that storage is boring, right? Nobody cares about storage, but honestly, it's one of the most interesting and fastest growing and evolving platforms in Azure. >> We joke, sometimes we call it snore-age, but Stu and I are kind of boring people, so we love talking about it. >> I like that. >> So you got file, you got object, you got block, you got big data solutions, you got high performance file solutions. Okay, like you say, this expanding portfolio. >> Karl, I look back at my career and Microsoft's had a long partnership, not only on the compute side, but really on the storage side, maybe isn't as well known as shipping on every PC and server out there. 
A lot has changed when you talk about Azure and Azure Stack coming out. Maybe explain a little bit, I believe you called it the first party versus the second party: how Microsoft does it versus how Microsoft partners do it, and how those mesh together. >> Yeah, absolutely. Well, I'll tell you. So I joined the company about five years ago, and I've been on the storage team for the last year. I was a field specialist, a subject matter expert, before that, working very, very closely with customers. And what I love, what I've seen over this period through the Satya Nadella era, is just this open Microsoft that says, we don't have to do everything. We don't have to try to provide everything to the customer. We really believe in that, and I think we diffuse that best-of-breed attitude going forward. Our partners feel that, whether we're working with Veeam in Azure Public Cloud as a target, or with them offering protection of VMs in public cloud, which is necessary, by the way. I think that's a huge fallacy in the industry, that you place your app, you place your machine in a public cloud, and it's magically protected by pixies. It's not. >> Backup and security aren't a concern, wherever you put it, right? >> Absolutely, wherever they are. So we rely on our partners like Veeam to provide that. And really where Azure Stack comes in is providing that consistent experience, not just to our customers, but also to our partners. So Veeam is able to protect Azure Public assets in the same manner they're able to protect private Azure Stack resources. So really it's just offering customers choice to use best-of-breed solutions, and allowing our partners to have an easy means to support both on-premises and public Cloud. >> So it's like a service catalog that you guys offer, and then you advise customers, or they pick and choose what they want? How's that all work? >> Yeah, so really what we do, and that's a great way to put it. We have what we call the Azure Marketplace that's present in the Azure Public Cloud, and we extend that to Azure Stack. So if I'm a customer who wants to deploy Veeam, per se, in either infrastructure, I go to this catalog of apps. I mean, it literally is a catalog of apps. Search for Veeam, there it is, and I can single-click deploy in either Azure Stack or Azure Public. >> Microsoft is unique in the sense of its hybrid strategy: in terms of what you have in the cloud, you have on-prem. You're trying to, wherever possible, make it identical. >> Karl: Absolutely. >> Microsoft and Oracle are really the only two companies that have a stated strategy to do that. Let's talk about Microsoft in terms of where you're at, in terms of getting that substantially similar capability on-prem and in the public Cloud. >> Yeah, absolutely. That's a great, great topic to discuss. Azure Stack, I always like to tell folks, full disclosure, and we don't try to hide this at all, that's not who we are, but it will always lag a little bit behind Azure Public. When you think about the controls in customers' data centers for rolling out code updates and new versions of software, new capabilities, there's always an adoption curve. You have folks who are a little more hesitant to release quickly and adopt quickly. So Azure Stack offers them the capability to defer some of those updates for a period of time. So there will be a lag. We have to qualify multiple vendor platforms, and we've chosen to go to market in a hyperconverged model with our partners, like Dell EMC, HP, Lenovo and Cisco.
Whereas Azure Public, that's a completely controlled infrastructure, and we're able to deploy very quickly. And we do; we're constantly iterating and releasing new features. So I think that's the biggest difference between the two. >> So Karl, you gave a session here at the show called Migrating to Azure. That whole move is pretty challenging. >> Karl: Oh yes. Am I lifting and shifting? Am I transforming? Am I building new? What are you hearing from customers? And give our audience a taste of some of the key takeaways that you were talking about. >> Yeah, absolutely. So that's one of the biggest concerns that we've had over the last couple of years. As I said earlier, we want the happiest customers in Public Cloud, and no Cloud regret or remorse. So what we talked about in our session was a tool that we released recently called Azure Migrate, that is all about assessing and setting expectations for customers around what can and cannot migrate, how much it will cost to run that infrastructure in Public Cloud, either as is or optimized, and then suggestions for optimizing their infrastructure to get the best bang for their buck. So there are great opportunities to save cost when platforms are adopted, like the Azure SQL platform-as-a-service offerings. When I've got that time-sharing concept, when I take away maintenance activities around operating systems and software releases, there are significant cost savings versus a lift and shift, which can quite honestly be more expensive than what that customer is doing on-premises today. So Azure Migrate is meant to help customers avoid that, no regrets. >> I wonder what you're hearing from customers, 'cause there's some concern: maybe I should just do infrastructure as a service, 'cause if I get into those platform-as-a-service offerings, am I locked in? Microsoft is used for lots of business-critical applications. I see Microsoft strongly in the Kubernetes ecosystem, getting into functions as a service, and those things are trying to give me a little bit more portability and flexibility. Maybe discuss some of that. >> Yeah, that's great, and I'm glad you brought that back around. So there is always that concern about the Cloud Hotel California, right? I like to half-jokingly refer to it as: you get in, you can never leave. And there is that jeopardy with any provider, that if you're using some proprietary platform you can be locked in, and really we try to promote the use of containers extensively with those customers who have that concern. And even with our hosted analytics and hosted database infrastructures, we make sure to provide those portable cross-Cloud platforms, like Postgres and MySQL. Our analytics is all Ubuntu based. Really, we don't want that lock-in to be there, we don't want that to be a concern. So continuing support for open platforms and ecosystems is really something we're committed to. >> The lock-in, openness choice, it's a spectrum. I've been in this business for a long time, and Unix used to be the open system. And then today, you can't get more locked in than a Unix platform. So I feel as though, and I wonder if you guys can comment, the Cloud has transparent pricing and transparent billing. And so lock-in is, if I have a customer and they're trying to move and they're up for a contract renewal or a maintenance renewal, I'm going to jack up their maintenance. But you can't just do that across the board if you have transparent billing. So there's the pricing aspect.
There's certainly a lock-in with the processes and procedures that you choose, but no matter what you choose, whether it's open source, a Cloud provider like Amazon, or an on-prem provider like the many that we know out there, you're going to be locked in to your processes and procedures. So it's a matter of degree. I personally see it, because of the Cloud, as a lot less onerous than it used to be. Do you guys agree with that? >> I mean, Dave, the application is the long pole in the tent from what I see. What have I been using, and if I go to something new, if I go build this new cloud-native architecture or whatever, that's a pretty big bet. So it depends on how deep and tied that is to a specific platform; even if I'm just choosing a database, migrating databases isn't easy.

Published Date : May 16 2018

SUMMARY :

Brought to you by Veeam. the leader in live tech coverage. Thank you for having me. picture of your family. They get into it, they keep me out of it. My mom, you know, kept us going. Azure, Cloud and storage. What do you have? So for the last year I've been and we can't do that without our partners. that continues to grow. so we love talking about it. So you got file, I believe you called it the first party and I've been on the storage and allowing our partners to have and we extend that to Azure Stack. the cloud you have on-prem. and in the public Cloud. I always like to tell folks, So Karl, you give a that you were talking about. So that's one of the biggest concerns getting into the functions as a service, and I'm glad you brought that back around. and I wonder if you guys can comment, it's that application is the long pole in to whatever bet you make, I remember four years ago you and I So you do, and it is a bet. as they say in England. up with you guys in a big way. and everything from the relationship and you guys are making great progress. Thank you Dave, it was a pleasure. We'll be back with our next guest.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

MicrosoftORGANIZATION

0.99+

CiscoORGANIZATION

0.99+

EnglandLOCATION

0.99+

LenovoORGANIZATION

0.99+

OracleORGANIZATION

0.99+

Karl RautenstrauchPERSON

0.99+

KarlPERSON

0.99+

Stu MinimanPERSON

0.99+

HPORGANIZATION

0.99+

Brad AndersonPERSON

0.99+

DavePERSON

0.99+

ChicagoLOCATION

0.99+

AmazonORGANIZATION

0.99+

twoQUANTITY

0.99+

three boysQUANTITY

0.99+

Three boysQUANTITY

0.99+

VEEAMONORGANIZATION

0.99+

Azure StackTITLE

0.99+

Azure MigrateTITLE

0.99+

two companiesQUANTITY

0.99+

last yearDATE

0.99+

todayDATE

0.99+

VeeamORGANIZATION

0.99+

Azure PublicTITLE

0.99+

Satya NadellaPERSON

0.99+

first partyQUANTITY

0.99+

Chicago, IllinoisLOCATION

0.99+

second partyQUANTITY

0.99+

CDWORGANIZATION

0.99+

SoftchoiceORGANIZATION

0.99+

Azure sqlTITLE

0.99+

Cloud Hotel CaliforniaORGANIZATION

0.99+

StuPERSON

0.98+

InsightORGANIZATION

0.98+

oneQUANTITY

0.98+

VeeamTITLE

0.98+

MySQLTITLE

0.98+

four years agoDATE

0.98+

AzureTITLE

0.98+

Azure Public CloudTITLE

0.97+

SHIORGANIZATION

0.96+

2018DATE

0.95+

one providerQUANTITY

0.95+

VeeamPERSON

0.94+

bothQUANTITY

0.94+

Dell EMCORGANIZATION

0.93+

AvereORGANIZATION

0.93+

UbuntuTITLE

0.93+

Public CloudTITLE

0.92+