
Rainer Richter, Horizon3.ai | Horizon3.ai Partner Program Expands Internationally


 

(light music) >> Hello, and welcome to theCUBE's special presentation with Horizon3.ai, with Rainer Richter, Vice President of EMEA (Europe, Middle East and Africa) and APAC (Asia Pacific) at Horizon3.ai. Welcome to this special CUBE presentation. Thanks for joining us. >> Thank you for the invitation. >> So Horizon3.ai, driving global expansion, big international news with a partner-first approach. You guys are expanding internationally. Let's get into it. You guys are driving this newly expanded partner program to new heights. Tell us about it. What are you seeing in the momentum? Why the expansion? What's all the news about? >> Well, I would say internationally we have a similar situation to the US. There is a global shortage of well-educated penetration testers on the one hand. On the other hand, we have a rising demand for network and infrastructure security. And with our approach of autonomous penetration testing, I believe we are totally on top of the game, especially as we are also now starting with an international instance. That means, for example, if a customer in Europe is using our service, NodeZero, they will be connected to a NodeZero instance which is located inside the European Union. And therefore, they don't have to worry about the conflict between the European GDPR regulations and the US CLOUD Act. So there, I would say, we have a really good package for our partners, one that lets them provide differentiators to their customers. >> You know, we've had great conversations here on theCUBE with the CEO and the founder of the company around the leverage of the cloud and how successful that's been for the company. And obviously, I can just connect the dots here, but I'd like you to weigh in more on how that translates into the go-to-market here, because you've got great cloud scale with the security product you guys are having success with. Great leverage there, and I'm seeing a lot of success there.
What's the momentum on the channel partner program internationally? Why is it so important to you? Is it just the regional segmentation? Is it the economics? Why the momentum? >> Well, there are multiple reasons. First of all, there is a rising demand for penetration testing. And don't forget that internationally we have a much higher percentage of SMB and mid-market customers. These customers, typically, most of them didn't even have a pen test done once a year. For them, pen testing was just too expensive. Now, with our offering together with our partners, we can provide different ways for customers to get autonomous pen testing done more than once a year, at an even lower cost than a traditional manual pen test. That is because we have our Consulting PLUS package, which is typically for pen testers. They can go out and do much faster, much quicker pen tests at many customers, one after another. So they can do more pen tests at a lower, more attractive price. On the other side, there are others, or even the same partners, who are providing NodeZero as an MSSP service. They can go after SMB customers saying, "Okay, you only have a couple of hundred IP addresses. No worries, we have the perfect package for you." And then you have, let's say, the mid-market, a thousand or more employees, where they might even have an annual subscription. Very traditional, but for all of them it's the same: the customer or the service provider doesn't need a piece of hardware. They only need to install a small Docker container and that's it. And that makes it so smooth to go in and say, "Okay, Mr. Customer, we just put this virtual attacker into your network, and that's it, and all the rest is done." And within three clicks, they can act like a pen tester with 20 years of experience. >> And that's going to be very channel-friendly and partner-friendly, I can almost imagine.
So I have to ask you, and thank you for calling out that breakdown and segmentation. That was good, that was very helpful for me to understand, but I want to follow up, if you don't mind. What type of partners are you seeing the most traction with, and why? >> Well, I would say at the beginning you typically have the innovators, the early adopters, typically boutique-sized partners. They start because they are always looking for innovation. Those are the ones who start at the beginning. So we have a wide range of partners, mostly managed by the owner of the company. They immediately understand, okay, there is the value, and they can change their offering. They're changing their offering in terms of penetration testing because they can do more pen tests, and they can then add other services. Or we have those who offered pen test services but did not have their own pen testers. They had to go out on the open market and source pen testing experts to get the pen test at a particular customer done. And now, with NodeZero, they're totally independent. They can go out and say, "Okay, Mr. Customer, here's the service. That's it, we turn it on. And within an hour, you are up and running." >> Yeah, and those pen tests are usually expensive and hard to do. Now it's right in line with the sales delivery. Pretty interesting for a partner. >> Absolutely. But on the other hand, we are not killing the pen testers' business. With NodeZero, we are providing what I would call the foundational work: the ongoing penetration testing of the infrastructure and the operating systems. And the pen testers themselves can then concentrate in the future on things like application pen testing, for example, services which we are not touching. So we are not killing the pen tester market. We are just taking over the ongoing, let's say, foundational work, call it that way. >> Yeah, yeah.
That was one of my questions. I was going to ask: there's a lot of interest in this autonomous pen testing. One, because it's expensive to do, because the skills required are in demand and they're expensive. (chuckles) So you kind of cover the entry-level and the blockers that are in there. I've seen people say to me, "This pen test becomes a blocker for getting things done." So there's been a lot of interest in autonomous pen testing and in organizations having that posture. And it's an overseas issue too, because now you have that ongoing capability. So can you explain that particular benefit for an organization, continuously verifying its posture? >> Certainly. So typically, you have to do your patches. You have to bring in new versions of operating systems, of different services and components, and they are always bringing new vulnerabilities. The difference here is that with NodeZero, we are telling the customer or the partner which vulnerabilities are actually executable. Previously, they might have had a vulnerability scanner, and this vulnerability scanner brought up hundreds or even thousands of CVEs, but didn't say anything about which of them are really executable. Then you need an expert digging into one CVE after the other, finding out: is it really executable, yes or no? And that is where you need highly paid experts, which is where we have a shortage. So with NodeZero, we can say, "Okay, we tell you exactly which ones you should work on, because those are the ones which are executable. We rank them according to risk level, by how easily they can be used." And the good thing is, in contrast to a traditional penetration test, they don't have to wait a year for the next pen test to find out if the fix was effective. They just run the next scan and say, "Yes, closed. Vulnerability is gone."
>> That time is really valuable. And if you're doing any DevOps, cloud-native, you're always pushing new things. So ongoing pen testing is actually a benefit just in general, as a kind of hygiene. So really, really interesting solution. Really bringing that global scale is going to be a new coverage area for us, for sure. I have to ask you, if you don't mind answering, what particular region are you focused on, or plan to target, for this next phase of growth? >> Well, at this moment we are concentrating on the countries inside the European Union, plus the United Kingdom. And of course, logically, I'm based in the Frankfurt area. That means we cover more or less the countries just around: the so-called DACH region, Germany, Switzerland, Austria, plus the Netherlands. But we also already have partners in the Nordics, like in Finland and Sweden. We have partners already in the UK, and it's rapidly growing. So, for example, we are now starting some activities in Singapore and also in the Middle East. Very importantly, depending on, let's say, the way business is done, we currently try to concentrate on those countries where English is at least an accepted business language. >> Great. Is there any particular region you're having the most success with right now? Sounds like the European Union is kind of the first wave. What's the most- >> Yes, that's the first. Definitely, that's the first wave. And now, with also getting the European instance up and running, it's clearly our commitment to the market, saying, "Okay, we know there are certain dedicated requirements and we take care of this." And we are just launching it; we are building up this instance in the AWS service center here in Frankfurt, also with some dedicated hardware in a data center in Frankfurt, where, with the DE-CIX, we have by the way the highest internet interconnection bandwidth on the planet.
So we have very short latency to wherever you are on the globe. >> That's a great benefit to call out too. I was going to ask that. What are some of the benefits your partners are seeing in EMEA and Asia Pacific? >> Well, I would say the benefit for them is clearly that they can talk with customers and offer them penetration testing which they didn't even think about before, because penetration testing done the traditional way was simply too expensive for them, too complex, the preparation time was too long, and they didn't even have the capacity to support an external pen tester. Now, with this service, you can go in and say, "Mr. Customer, we can do a test with you in a couple of minutes. Once we have installed a Docker container, within 10 minutes we have the pen test started. That's it, and then we just wait." And I would say we are seeing so many aha moments. On the partner side, when they see NodeZero working for the first time, it's like they say, "Wow, that is great." And then they walk out to customers, typically the friendly customers at the beginning, and show it to them: "Wow, that's great, I need that." And I would say the feedback from the partners is that this is a service where they do not have to evangelize the customer. Everybody understands penetration testing; I don't have to describe what it is. The customer understands immediately: "Yes, penetration testing, heard about that. I know I should do it, but it's too complex, too expensive." Now, for example as an MSSP service provided by one of our partners, it's getting easy. >> Yeah, and there's great benefit there. I mean, I've got to say I'm a huge fan of what you guys are doing. I like this continuous automation. That's a major benefit to anyone doing DevOps or any kind of modern application development. This is just a godsend for them; this is really good.
And like you said, the pen testers that were doing it were kind of coming down from their expertise to do things that should have been automated. They get to focus on the bigger-ticket items. That's a really big point. >> Exactly. So we free the pen testers for the higher-level elements of the penetration testing segment, and that is typically application testing, which is currently far away from being automated. >> Yeah, and that's where the most critical workloads are, and I think this is the nice balance. Congratulations on the international expansion of the program, and thanks for coming on this special presentation. I really appreciate it. Thank you very much. >> You're welcome. >> Okay, this has been a theCUBE special presentation on pen test automation and international expansion with Horizon3.ai, a really innovative solution. In our next segment, Chris Hill, Sector Head for Strategic Accounts, will discuss the power of Horizon3.ai and Splunk in action. You're watching theCUBE, the leader in high tech enterprise coverage. (steady music)

Published Date : Sep 27 2022



Vittorio Viarengo, VP of Cross Cloud Services, VMware | VMware Explore 2022


 

(gentle music intro) >> Okay, we're back. We're live here at theCUBE at VMware Explore, formerly VMworld. I'm John Furrier with Dave Vellante. Three days of wall-to-wall coverage. We've got Vittorio Viarengo, the Vice President of Cross-Cloud Services at VMware. Vittorio, great to see you, and thanks for coming on theCUBE right after your keynote. I can't get that off my tongue, VMworld. 12 years of CUBE coverage. This is the first year of VMware Explore, formerly VMworld. Raghu explained in his keynote that the VMworld community, now with the multi-cloud services you're in charge of at VMware, is now the Explore brand, going to explore the multi-cloud. That's a big part of Raghu's vision and VMware's. You're driving it, and you were on the stage just now. What's, what's going on? >> Yeah, what I said in my keynote is that our customers have been the explorers of the new IT frontier, always challenging the status quo. And we've been, our legendary engineering team has been, behind the scenes, providing them with the tools and the technology to be successful in that journey to the private cloud. And Kelsey said it: what we built was the foundation for the cloud. And now it's time to start a new journey in the multi-cloud. >> Now, one of the things that we heard today clearly was: multi-cloud's a reality. Cloud chaos, Kit Colbert was talking about that, and we've been saying, you know, people are chaotic. We believe that. Andy Grove once said, "Let chaos reign, then rein in the chaos." That's the opportunity. The complexity of cross-cloud is being solved. You guys have a vision; take us through how you see that happening. A lot of people want to see this cross-cloud abstraction happen. What's the story from your standpoint, how you see that evolving? >> I think that IT history repeats itself, right? Everything starts nice and neat. "Oh, I'm going to buy a bunch of HP servers and my life is going to be good, and oh, this storage." >> Spin up an EC2. >> Yeah. Eventually everything goes like this in IT, because every vendor does what they do: they innovate. And so that creates complexity. And in the cloud, the complexity is on steroids, because you have six major clouds plus all the local cloud providers, and each of these clouds brings its own way of doing management and security. And I think now it's time. Every customer that I talk to wants more simplicity. You know, how do I go fast but be able to manage the complexity? So that's where cross-cloud services come in. Last year, we launched a vision, with a sprinkle of software behind it, of building a set of cloud-native services that allow our customers to build, run, manage, secure, and access any application consistently across any cloud. >> Yeah, so you're a year in now. It's not like, I mean, you know, when you come together in a physical event like this, it resonates more, you've got the attention. When you're watching the virtual events, you're doing a lot of different things. So it's not like you just stumbled upon this last week. Okay, so what have you learned in the year post-launch? >> What we learned builds on what we have been building for the last five years, right? We saw multi-cloud happening before anybody else, I would argue, with our announcement with AWS five, six years ago, right? And then our first journey to multi-cloud was: let's bring vSphere to all the clouds. And that serves a great purpose, helping our customers accelerate the journey of their "legacy" applications, the applications that actually deliver the business, to the cloud. But then around two, three years ago, I think Raghu realized that to add value, we needed, customers were already in the cloud, we needed to embrace the native clouds. And that's where Tanzu came in, as a way to build applications, and to manage and secure applications.
And now with Aria, we have more differentiated software to actually manage these applications across- >> Yeah, and Aria is the management plane. That's the rebrand. It's not a new product per se. It's a collection of the VMware stuff, right? Isn't it like- >> No, it's, it's a... >> It's a new product? >> There is new innovation there, because basically the engineering team built this graph, and Raghu compared it to the graph that Google builds about the web. So we go out and crawl all your assets across any cloud, and we build you this model that now allows you to see what your assets are, how you can manage them, what the performance is, and all that. So no, it's more than a brand. It's new innovation and an integration of technology that we had. >> And that's a critical component of cross-cloud. So I want to get back to what you said about Raghu and what he's been focused on. You know, I remember interviewing him in 2016 with Andy Jassy at AWS, and that helped clear up the cloud game. But even before that, Raghu and I had talked, Dave, on theCUBE, I think it was like 2014? >> Yeah. >> Pat Gelsinger was just getting on board as the CEO of VMware. Hybrid was very much in the conversation then. Even then it was early. Hybrid was early; you guys were seeing multi-cloud early. >> It was private cloud. >> Totally give you props on that. So VMware gets total props on that, being right on that. Where are we in that journey? 'Cause super cloud, as we're talking about, you were contributing to that initiative in the open with our open source project. What is multi-cloud? Where is it in the eyes of the customer? I think everyone will agree multi-cloud is an outcome that's going to happen. It's happening. Everyone has multiple clouds and they configure things differently. Where are we on the progress bar in your mind?
>> I think I want to answer that question and then go back to your earlier question, which I didn't address: what we are learning from customers. I think most customers are at the very, very beginning. They're either in the denial stage, like a customer I talked to yesterday. I asked, "Are you on your multi-cloud journey?" And he said, "Oh, we are on-prem and a little bit of Azure." I said, "Oh really? So the bus-" "Oh no, well, the business unit is using AWS, right? And we acquired a company that is using-" I said, "Okay, so you are..." That customer is in the cloud-first stage. >> Like you said, we've seen this movie before. It comes around, right? >> Yeah. >> Somebody's going to have to clean that up at some point. >> Yeah, I think the majority of customers are either in denial or in the cloud chaos. And some customers are pushing the envelope, like S&P Global, who we heard from this morning. They have done the whole journey in the private cloud with us, and when I talked to him a few months ago, he told me, "I had to get in front of my developers. Enough of this wild west. I had to lay down the tracks and guardrails for them to build multi-cloud in a way that gives them choice, but lets me, as an operator and a security person, manage it and secure it." And so I think most customers are in that chaos phase right now. Very early. >> So at our Supercloud22 event, we were riffing and I was asking you about, are you going to hide the complexity? Yes. But you're also going to give access to the developers if they want access to the primitives. And I said to you, "It sounds like you want to have your cake and eat it too." And you said, "And want to lose weight." And I never followed up with you, so I want to follow up now. By "lose weight," I presume you mean be essentially that platform of choice, right?
So you're going to simplify, but you're going to give access to the developers for those primitives, if in fact they want them. And you're going to be the super cloud, my word of choice. So my question to you is, first of all, is that correct, your "lose weight"? And why VMware? >> When I say you want a cake, want to eat it, and want to lose weight, I, and I'm going to sound a little arrogant, it's hard to be humble when you're good. But I work for a company that does that. Has done it over and over and over again. We have done stuff, I... Sometimes when I go before customers, I say, "And our technology does this." Then the customer gets on stage and I go, "Oh my God, oh my God." And then the customer says, "Yeah, plus I realized that I could also do this." So that's, you know, that's the kind of company that we are. And I think that we were so busy being successful with on-prem that, you know, the cloud happened. Under our eyes. But now with the multi-cloud, I think there is opportunity for VMware to do it all over again. And we are the right company to do it for two reasons. One, we have the right DNA. We have those engineers that know how to make stuff that was not designed to work together work together. And two, the right partnerships, because everybody partners with us. >> But, you know, a lot of companies, like, oh, they missed cloud, they missed mobile. They missed that, whatever it was. VMware was very much aware of this. You made an effort to do kind of your own cloud initiative, backed off from it, and everybody was like, this is a disaster waiting to happen, and of course it was. And so then you realized that, you learned from your mistakes, and then you embraced the AWS deal. And that changed everything. It cleared it up for your customers. I'm not hearing anybody saying that the cross-cloud services strategy, what we call multi, uh, super cloud, is wrong.
Nobody's saying that's like a failed, you know, strategy. Now the execution obviously is very important. So that's why I'm saying it's different this time around. It's not like you don't have your pulse on it. I mean, you tried before, okay, the strategy wasn't right, it backfired, okay, and then you embraced it. But now people are generally in agreement that there's either a problem or there's going to be a problem. And so you've kind of just addressed why VMware: because you've always been in the catbird seat to solve those problems. >> But it is a testament to the pragmatism of the company, right? In technology, you cannot always get it right, right? When you don't get it right, you say, "Okay, that didn't work. What is next?" And I think now we're onto something. It's a very ambitious vision, for sure. But if you look at the companies out there that have the muscles and the DNA and the resources to do it, I think VMware is one. >> One of the risks to the success, you know, if you watch the Twitter chatter, is, "Oh, can VMware actually attract the developers?" John chimed in and said, >> Yeah. >> It's not just the devs. I mean, not just devs. But also, when you think of DevOps, the ops, right? When you think about securing and having that consistent platform. So when you think about the critical factors for you to execute, you have to have that PaaS platform, no question. Well, how do you think about, okay, where are the gaps that we really have to get right? >> I think that for us to go and get the developers on board, it's too late. And it's too late for most companies. Developers go with open source; they go with the path of least resistance. So our way in, and as Kelsey Hightower said, building new applications, more applications, is a team sport. And part of that team is the ops team. And there we have an entry, I think. Because that's what- >> I think it's possible. I think you're hitting it.
And my dev comment, by the way, I've been kind of snarky on Twitter about this, but I say, "Oh, the devs have got it easy. They're sitting on the beach with sunglasses on, you know, having focaccia." >> Doing whatever they want. >> Happy, doing whatever they want. No, it's a better life for the developer now. Open source is the software industry; that's going great. Shift left in the CI/CD pipeline. Developers are faster than ever, they're innovating. It's all self-service, it's all DevOps. It's looking good for the developers right now. And that's why everyone's focused on that. They're driving the change. The ops team, that was traditional IT ops, is now DevOps with developers. So the sea change of data and security, which is core, we're hearing a lot of that. And if you look at all the big successes, Snowflake, Databricks, MinIO, who was on earlier with S3-compatible cloud storage anywhere, this is the new connective tissue that VMware can connect to, and extend the operational platform of IT to connect developers. You don't need to win them all over. You just connect to them. >> You just have to embrace the tools that they're using. >> Exactly. >> You just got to connect to them. >> You know, you bring up an interesting point. Snowflake has to win the developers, 'cause they're basically saying, "Hey, we're building an application development platform on top of our proprietary system." You're not saying that. You're saying we're embracing the open source tools that developers are using, so use them. >> Well, we give you a single pane of glass to manage your applications everywhere. And going back to your point about not hiding the underlying primitives, we manage that application, right? That application could be moving around, but nobody prevents that application from using the APIs underneath. I mean, it can always do that. >> Right, right.
>> And one of the reasons why we had Kelsey Hightower in my keynote and the main keynote was that I think he shows the template, the blueprint, for our customers, our operators: if you want to propel your career forward, look at what he did, right? VI admin, going up the stack, storage and everything else, and then eventually embracing Kubernetes, becoming an expert. He really took the time to understand how modern applications are built. And now he's a luminary in the industry. We don't all have to become luminaries, but our customers right here, doing the labs upstairs, can propel their careers forward with this. >> So summarize what you guys are announcing around cross-cloud services. Obviously Aria, another version, 1.3, of Tanzu. Lay out the news. >> Yeah, so with Tanzu, we have taken one step forward with our developer experience, so that, speaking of meeting developers where they are, we have application templates and the ability to plug into their IDE of choice. So a lot of innovation there. Then on the Aria side, I think that's the name of the game in multi-cloud: having that object model that allows you to manage anything across anything. And then, we talked about cross-cloud services being a vision last year. When I launched it, I put security and networking up there as a cloud service, but it was still down here as deployed technology. And now with NSX, the latest version, we brought that control plane into the cloud as a cloud-native cross-cloud service. So, a lot of meat around the three pillars: development, management, and security. >> And then the complementary component of vSphere 8 and vSAN 8 and the whole DPU thing, 'cause that's cloud, right? I mean, we saw what AWS did with Nitro. >> Yeah. >> Five, seven years ago. >> That's the consumption model of cloud. >> That's the future of computing architecture. >> And the licensing model underneath. >> Oh yeah, explain that. Right, the universal licensing model.
Yeah, so basically what we did when we launched Cloud Universal was say, okay, you can buy our software using credits that you have on AWS. And I said, okay, that's kind of hybrid cloud, it's not multi-cloud, right? But then we brought in Google, and now the latest is Microsoft. Now you can buy our software with the credits and investments that our customers already have with these great partners of ours, and use it to consume as a subscription. >> So that kind of changes your go-to-market, and you're not just chasing an ELA renewal now. You're probably talking to different people within the organizations as well, right? If I can use credits for whatever, Google, for Azure, for on-prem, for AWS, right? Those are necessarily different factions in the organization. >> So not just the technology is multi-cloud; the consumption model is truly multi-cloud as well. >> Okay, Vittorio, what's next? What's the game plan? What do you have going on? It's getting good traction here again. Like Dave said, no one's poo-pooing cross-cloud services. It's kind of a timing, market-forces thing. We were just talking before you came on: customers may not think they have a problem, but whether they're the frog in boiling water or not, they will have the problem coming up, or they don't think they have a problem but they have chaos reigning. So what's next? What are you doing? Is it going to be new tech, new markets? What is the plan? >> So I think, if I take my bombastic marketing-side hat off and look at the technology, customers at this scale want to be told what to do. And so I think what we need to do going forward is articulate these cross-cloud services use cases. Like, okay, what does it mean to have an application that uses a service over here and a service over there, and then show the value of getting these components from one company? Because cross-cloud services, at your event, how many vendors were there? 20? 30? >> Yeah.
>> So the market is there. I mean, these are all revenue-generating companies, right, but they provide a piece of the puzzle. Our ambition is to provide a platform approach. And so we need to articulate better what the advantages are of getting these components, management, security, from- >> And Kit was saying, it's a hybrid kind of scenario. I was kind of saying, putting my little business school scenario hat on, oh yeah, you go hardcore competitive, best product wins, kill or be killed, compete and win. Or you go open and you create a keiretsu, create a consortium, get support, standardize or de facto standardize a bunch of it, and then let everyone monetize or participate. >> Yeah, we cannot do it alone. >> What's the approach you guys want to take? >> So first of all, wherever possible, we're not going to do it alone. Right, so the ecosystem is going to play a part, and if the ecosystem can come together around a consortium or a standard that makes sense for customers? Absolutely. >> Well, and you say nobody's poo-pooing it, and I stand by that. But they are saying, and I think it is true, it's hard, right? It's a very challenging, ambitious goal that you have. But yeah, you've got a track record of- >> I mean the old playbook, >> Exactly! >> The old playbooks are out. I mean, I always say, the old kill-and-be-highly-competitive strategy, proprietary, is dead. The old way of winning was, okay, you know, we're going to lock customers in- >> What do you mean proprietary is dead? Proprietary's not dead. >> No, I mean, okay, I'm talking about how people sell. Enterprise companies love to create value with chaos, like, okay, solve complexity with more complexity. So that's over, you think that's how people are marketing? >> No, no, it's true. But I mean, we see a lot of proprietary out there. >> Like what? >> It's still happening. Snowflake.
(laughing) >> Tell that to the entire open source software industry. >> Right, well, but that's not your play. I mean, you have to have some kind of proprietary advantage. >> The enterprise playbook used to be solve complexity with complexity, lock the customers in. Cloud changed all that with open. You're a seasoned marketer, you're also an executive. You have an interesting new wave. How do you market to the enterprise in this new open way? How do you win? >> For us, I think we have that relationship with the C-level and we have delivered for them over and over again. So our challenge from a marketing perspective is to educate these executives about all that. And the fact that we didn't have this user conference in person didn't help, right? And then show that value to the operators so that they can help us just like they did in the past. I mean, our sales motion in the past was, we made these people- I told them today, you were the heroes. When you virtualized, when you brought down 1000 servers to 80, you were the hero, right? So we need to empower them with the technology and the know-how to be heroes again in multi-cloud. And I think the business will take care of itself. >> Okay, final question from me, and Dave might have another one of his. Everybody wanted to know this year at VMworld, VMware Explore, which is the new name, what would it look like? What would the vibe be? Would people show up? Would it be vibrant? Would cross-cloud hunt? Would supercloud be relevant? I've got to say, looking at the floor last night, looking at the keynotes, looking at the perspective, it seems like, oh, people are on board. What is your take on this? You've been talking to customers, you're talking to people in the hallways. You've been briefing all the analysts. What is the vibe about this year's Explore? >> I think, you've been covering us for a long time, this is a religious following we have. And we don't take it for granted.
I told the audience today, this to us is a family reunion, and we couldn't be happier. So we got a sense of, like, that's what it feels like, the family is back together. >> And there's a wave coming too. It's not like business is dying. It's like a whole 'nother, another wave is coming. >> It's funny you mention the heroes. 'Cause I go back, I don't really have a last question, but just the last thought is, I remember the first time I saw a demo of VMware and I went, "Holy crap, wow. This is totally game changing." I was blown away. Right, like you said, 80 servers down to just a couple of handfuls. This is going to change everything. And that's where it all started. You know, I mean, I know it started in workstations, but that's when it really became transformational. >> Yeah, so I think we have an opportunity to do it over again with the family that is here today, and we consider you guys family as well. >> All right, favorite part of the keynote and then we'll wrap up. What was your favorite part of the keynote today? >> I think the excitement from the developer people that were up there. Kelsey- >> The guy who came after Kelsey, what was his name? I didn't catch it, but he was really good. >> Yeah, I mean, that's what it's all about, right? People that are passionate about solving hard problems and then cannot wait to share it with the community, with the family. >> Yeah. I love the one line, "You kids have it easy today. We walked to school barefoot in the snow back in the day." >> Uphill, both ways. >> Broke the ice to wash our face. >> Vittorio, great to see you, great friend of theCUBE, CUBE alumni, vice president of cross-cloud services at VMware. A critical new area that's harvesting the fruits coming off the tree as VMware invested in cloud native many years ago. It's all coming to the market, let's see how it develops. Congratulations, good luck, and we'll be back with more coverage here at VMware Explore. I'm John Furrier with Dave Vellante.
Stay with us after the short break. (gentle music)

Published Date : Aug 30 2022



Ana Pinheiro Privette, Amazon | Amazon re:MARS 2022


 

>> Okay, welcome back, everyone. Live CUBE coverage here in Las Vegas for Amazon re:MARS, the hot event: machine learning, automation, robotics, and space. Two days of live coverage. We're talking to all the hot technologists. We've got all the action, startups, and a segment on sustainability with Ana Pinheiro Privette, global lead of the Amazon Sustainability Data Initiative. Thanks for coming on theCUBE. Can I get that right? >> You did. >> Absolutely. Okay, great. <laugh> >> Thank you. >> Great to see you. We met at the analyst mixer and, um, I was blown away by the story going on at Amazon around the sustainability data initiative, because we were joking, everything's a data problem now, 'cause that's the cliche. But in this case you're using data in your program and it's really kind of got a bigger picture. Take a minute to explain what your project is, the scope of it, on sustainability. >> Yeah, absolutely. And thank you for the opportunity to be here. Um, okay. So, um, I lead this program that we launched several years back, in 2018 more specifically, and it's a tech-for-good program. And when I say tech for good, what that means is that we're trying to bring our technology and our infrastructure and lend that to the world, specifically to solve the problems related to sustainability. And as you said, sustainability inherently needs data. We need data to understand the baseline of where we are and also to understand the progress that we are making towards our goals, right? But one of the big challenges is that the data that we need is spread everywhere. Some of it is too large for most people to be able to, um, access and analyze. And so, uh, what we're trying to tackle is really the data problem in the sustainability space. Um, what we do more specifically is focus on democratizing access to data.
So we work with a broader community and we try to understand what those foundational data sets are that most people need to use in the space to solve problems like climate change or food security, or think about the sustainable development goals, right? Yeah. Like all the broad space. Um, and we basically then work with the data providers, bring the data to the cloud, and make it free and open to everybody in the world. Um, I don't know how deep you want me to go into it. There are many other layers to that. >> So the perspective is zooming out. You're looking at creating a system where democratizing data means making it freely available, so that practitioners or citizens, data wranglers, people interested in helping the world could get access to it and then maybe collaborate with people around the world. Is that right? >> Absolutely. So one of the advantages of using the cloud for this kind of, uh, effort is that, you know, the cloud is virtually accessible from anywhere where you have, you know, internet or bandwidth, right? So, uh, when you put data in the cloud in a centralized place next to compute, it really, uh, removes the need for everybody to have their own copy. Right. The traditional way is that you bring the data next to your compute, and so we have these multiple copies of data. Some of them are on the petabyte scale. There's obviously the carbon footprint associated with the storage, but there's also the complexity that not everybody's able to actually analyze it or have that kind of storage. So by putting it in the cloud, now anyone in the world, independent of their compute capabilities, can have access to the same type of data to solve the problems. >> You know, I remember doing a report on this in 2018 or 2017.
I forget what year it was, but it was around public sector, where there was a movement with universities and academia, where they were doing some really deep compute and Amazon had big customers. And there was a movement towards an open commons of data, almost like a national data set, like a national park kind of vibe, that seems to be getting momentum. In fact, this kind of sounds like what you're doing, somewhat similar, where it's open to everybody. It's kinda like open source meets data. >> Uh, exactly. And the truth is that with this data, the majority of it- we primarily work with what we call authoritative data providers. So think of, like, NASA, NOAA, the UK Met Office, organizations whose mission is to create the data. So their mandate is actually to make the data public. Right. But in practice, that's not really the case. Right. A lot of the data is stored, like, on servers or tapes, or not accessible. Um, so yes, you bring the data to the cloud. And in this model that we use, Amazon never actually touches the data, and that's very intentional, so that we preserve the integrity of the data. The data provider owns the data in the cloud. We cover all the costs, but they commit to making it public and free to anybody. Um, and obviously the compute is next to it. So that's, uh, a value add. >> Okay, Ana. So give me some examples of, um, some successes you've had, some of the challenges and opportunities you've overcome. Take me through some of the activities, because, um, this is really needed, right? And sustainability is a top-line conversation, even here at the conference, re:MARS; they're talking about saving climate change with space, mm-hmm <affirmative>, which is legitimate. And they're talking about all these new things. So it's only gonna get bigger. Yeah. This data, what are some of the things you're working on right now that you can share? >> Yeah.
So what, for me, honestly, is the most exciting part of all of this is when I see the impact it's creating on customers and the community in general, uh, and those are the stories that really bring home the value of opening access to data. And I would just say, um, the program actually offers, in addition to the data, um, access to free compute, which is very important as well. Right? You put the data in the cloud. It's great. But then if you wanna analyze it, there's the cost, and we want to offset that. So we have, basically, an open call for proposals. Anybody can apply, and we subsidize that. So what we see by putting the data in the cloud, making it free, and making the compute accessible is that, for instance, startups jump on it very easily because they're very nimble. We basically remove all the cost of investing in the acquisition and storage of the data. The data is connected directly to the source and they don't have to do anything. So they easily build their applications and workloads on top of it and turn it on and off as they need, you know. >> So they don't have to pay for it. >> They basically just pay for the compute whenever they need it. Right. So all the data is covered. So that makes it very feasible for a lot of startups. And then we see everything from academia to nonprofits and governments working extensively on the data. >> What are some of the coolest things you've seen come out of the woodwork in terms of, you know, things built on top of the data? The builders out there are creative; all that heavy lifting's gone, so they're being creative. I'm sure there's been some surprises, um, or obvious verticals that jump out. Healthcare jumps out at me. I'm not sure if FinTech has a lot of data in there, but healthcare I can see, uh, as a big vertical, obviously, you know, um, oil and gas, probably a concern. Um,
But for instance, one of the things that is very, uh, common for people to use is, uh, NOAA data, like weather data, because, you know, basically weather impacts almost anything we do, right? So you have this forecast data coming into the cloud, directly streamed from NOAA. And, um, a lot of applications are built on top of that, like, um, forecasting radiation, for instance, for the solar industry, or helping with navigation. But I would say some of the stories I love to mention, because they are very impactful, are when we take data to remote places that traditionally did not have access to any data. Yeah. And for instance, we collaborate with a program, a nonprofit called Digital Earth Africa, which is basically a philanthropically supported program to bring earth observations to the African continent, making it available to communities and governments for things like fighting illegal mining, deforestation, you know, from mangroves to deep forest. Um, it's really amazing what they are doing. And, uh, they are managing- >> The low-cost nature of it makes it a great use case there. >> Yes, cloud. So it makes it feasible for them to actually do this work. >> Yeah. You mentioned the NOAA data, making me think of the Saildrone. Mm-hmm <affirmative>, my favorite, um, use case. Yes. Those sail drones go around- we've had them twice on theCUBE at re:Invent over the years. Yeah. Um, really good innovation. That vibe is here too at the show at re:MARS this week; at the robotics showcases you have startups and growing companies in the ML and AI areas. And you have that convergence, not obvious to many, but here, this culture is like, hey, it's all coming together. Mm-hmm <affirmative>, you know, physical, industrial space is a function of the new OT landscape. Mm-hmm <affirmative>. I mean, there's no edge in space, as they say, right. So it's unlimited edge. So this kind of points to the major trend.
It's not stopping this innovation, but sustainability has limits on earth. We have issues. >> We do have issues. And, uh, I think one of my hopes is that when we come to the table with the resources and the skills we have, and others do as well, we try to remove some of these big barriers, um, that make things harder for us to move forward as fast as we need to. Right. We don't have time to spend there. Uh, you know, it's been accounted that 80% of the effort to generate new knowledge is spent on finding the data you need and cleaning it. Uh, we don't have time for that. Right. So can we remove that undifferentiated heavy lifting and allow people to start at a different place and generate knowledge and insights faster? >> So that's key, that's the key point, having them innovate on top of it, right. What are some things that you wanna see happen over the next year or two, as you look out, um, hopes, dreams, KPIs, performance metrics, what are you driving to? What's your north star? What are some of those milestones? >> Yeah, so we are investing heavily in some areas. Uh, we support, um, you know, we support sustainability broadly, which, as you know, it's like, it's all over <laugh> the space, but, uh, there's an area that is, uh, becoming more and more critical, which is climate risk. Um, climate risk, you know, for obvious reasons we are experiencing, but also there's more regulatory pressure on businesses and companies in general to disclose their risks, not only the physical, but also the transition risks. And that's a very, uh, data-heavy and compute-heavy space. Right. And so we are very focused on trying to bring the right data and the right services to support that kind of activity.
So I'll put a plug here actually for a project we are supporting, which is called OS-Climate. Um, I don't know if you're familiar with it, but it's the Linux Foundation effort to create an open source platform for climate risk. And so they brought S&P Global, Airbus, you know, Allianz, all these big companies together. And we are one of the funding partners to basically do that baseline work. What is the data that is needed? What are the basic tools? Let's put it there and do the pre-competitive work, so then you can build the competitive part on top of it. >> It's kinda like a data clean room. >> It kind of is, right. But we need to do those things, right. >> Are they worried about competitive data, or is it more anonymized out? How do you- >> It has both, actually. So we are primarily contributing the open data part, but there's a lot of proprietary data that needs to stay behind the walls. So, yeah. >> You're on the cutting edge of data engineering, because, you know, web and ad tech technologies used to be where all that data sharing was done, mm-hmm <affirmative>, for commercial reasons. You know, the best minds in our industry, as quoted by a CUBE alum, are working on how to place ads better. Yeah. Jeff Hammerbacher, a founder of Cloudera, said that on theCUBE. Okay. And he was, like, embarrassed, but the best minds are working on how to make ads more efficient. Right. But that tech is coming to problem solving, and you're dealing with data exchange, data analysis from different sources, third parties. This is a hard problem. >> Well, it is a hard problem. And my perspective is that the hardest problem with sustainability is that it goes across all kinds of domains. Right. We've traditionally been very comfortable working in our little, you know, swim lanes, yeah, where we don't need to deal with interoperability and, uh, extracting knowledge.
But sustainability, you know, you touch the economic side, it touches the social or the environmental, it's all connected. Right. And you cannot just work in the little space and then go see the impact in the other one. So it's going to force us to work in a different way. Right. It's, uh, big data, complex data, yeah, from different domains. And we need to somehow make sense of all of it. And there's the potential of AI and ML and things like that that can really help us, right, to go beyond the modeling approaches we've done so far. >> And trust is a huge factor in all this, trust. >> Absolutely. And just going back to what I said before, that's one of the main reasons why, when we bring data to the cloud, we don't touch it. We wanna make sure that anybody can trust that the data is NOAA data or NASA data, but not Amazon data. >> Yes. Like we always say on theCUBE, you should own your data plane. Don't give it up. <laugh> Well, that's cool. Great to hear the update. Are there any other projects that you're working on that you think might be cool for people that are watching, that you wanna plug or point out? Because this is an area people are leaning into, yeah, and learning more, younger talent coming in, um, whether it's university students or people on side hustles who want to play with data. >> So we have plenty of data. So we have, uh, over a hundred data sets, uh, petabytes and petabytes of data, all free. You don't even need an AWS account to access the data and take it out if you want to. Uh, but I would say a few things that are exciting that are happening at re:MARS. One is that we actually got integrated into ADX, the AWS Data Exchange, and what that means is that now you can find the open, free data from ASDI with the same search capability and service as the paid data, right, licensed data. So hopefully we'll make it easier. And if you wanna play with data, we have actually something great.
We just announced a hackathon this week, uh, in partnership with UNESCO, uh, focused on the sustainable development goals, uh, a hundred K in prizes and, uh, so much data. <laugh> >> The world is your oyster. Where do they go check that out, at a URL, at a website? I see it's on Amazon. Is it your website, or a project they can join? How do people get in touch with you? >> Yeah. So, uh, Amazon SDI, like for Amazon Sustainability Data Initiative, so Amazon sdi.com, and you'll find, um, all the data, a lot of examples of customer stories that are using the data for impactful solutions, um, and much more. >> So, there's a new kind of hustle going on out there, seeing entrepreneurs do this. And very successfully: they pick a narrow domain and they own it, something really obscure that could be off the big players' reservation, mm-hmm <affirmative>, and they just become fluent in the data. And it's a big white space for them, right, this market opportunity. And at the minimum you're playing with data. So this is becoming kind of like a long-tail, domain-expertise data opportunity. Yeah, absolutely. This is really hot. So yes. Yeah. Go play around with the data, check it out, it's for a good cause too. And it's free. >> It's all free. >> Almost free. It's not always free, is it? >> Always free? Well, a friend of mine said it's only free if your time is worth nothing. <laugh> Yeah. >> Exactly. Well, Ana, great to have you on theCUBE. Thanks for sharing the stories. Sustainability is super important. Thanks for coming on. >> Thank you for the opportunity. >> Okay, CUBE coverage here in Las Vegas. I'm John Furrier. We'll be back with more day one after this short break.
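As an aside for readers who want to try the open data described above: because Open Data buckets on AWS allow anonymous reads, a plain HTTPS GET works with no AWS account at all. Below is a minimal sketch of building such a public object URL; the bucket and key shown are illustrative examples (check the Registry of Open Data on AWS for current dataset names), and the standard virtual-hosted-style S3 URL form is assumed.

```python
# Sketch: build a public HTTPS URL for an object in an AWS Open Data bucket.
# Open Data buckets permit anonymous reads, so the resulting URL can be
# fetched with any HTTP client -- no credentials required. The bucket and
# key used in the demo call are illustrative, not guaranteed to exist.

def public_object_url(bucket: str, key: str, region: str = "us-east-1") -> str:
    """Return the virtual-hosted-style URL for a public S3 object."""
    host = f"{bucket}.s3.{region}.amazonaws.com"
    return f"https://{host}/{key.lstrip('/')}"

# Example: a (hypothetical) NOAA forecast file in an open data bucket.
url = public_object_url("noaa-gfs-bdp-pds", "gfs.20220623/00/atmos/example.grb2")
print(url)
```

Fetching the printed URL with `curl` or a browser is then enough to download the object, which is the "no AWS account needed" access pattern described in the interview.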

Published Date : Jun 23 2022



Matthew Carroll, Immuta | Snowflake Summit 2022


 

(Upbeat music) >> Hey everyone. Welcome back to theCUBE's continuing coverage of day two of Snowflake Summit '22, live from Caesars Forum in Las Vegas. Lisa Martin here with Dave Vellante, bringing you wall-to-wall coverage yesterday, today, and tomorrow. We're excited to welcome Matthew Carroll to the program, the CEO of Immuta. We're going to be talking about removing barriers to secure data access. Matthew, welcome. >> Thank you for having me, appreciate it. >> Talk to the audience a little bit about Immuta. You're a Snowflake premier technology partner, but give us an overview of Immuta, what you guys do, your vision, all that good stuff. >> Yeah, absolutely, thanks. Yeah, if you think about what Immuta at its core is, we're a data security platform for the modern data stack, right? So what does that mean? It means that we embed natively into Snowflake and we enforce policies on data, right? So, the rules to be able to use it, to accelerate data access, right? So, that means connecting to the data very easily, controlling it with any regulatory or security policy as well as contractual policies, and then being able to audit it. So, that way, any corporation of any size can leverage their data and share that data without risking leaking it or potentially violating a regulation.
They need to remove those barriers. So, they need to be able to collect it, secure it, and be able to share it. Right? So, double- and triple-blinded studies being redesigned in the cloud. Government organizations, how do we share security information globally with different countries instantaneously? Right? So these are some of the examples where we're helping organizations transform and be able to kind of accelerate their adoption of data. >> Matt, I don't know if you remember, I mean, I know you remember coming to our office. But we had an interesting conversation, and I was telling Lisa, years ago I wrote a piece on, you know, how to build on top of AWS. You know, there's so much opportunity. And we had a conversation at our office, theCUBE studios in Marlborough, Massachusetts. And we both, sort of, agreed that there was this new workload emerging. We said, okay, there's AWS, there's Snowflake at the time, we were thinking, and you bring machine learning, at a time when we were using Databricks, >> Yeah. >> As the example, of course now it's been a little bit-
And we would build lots of applications, and we would build all of our business logic to enforce security controls and policies inside each app. And you'd go through security and get it approved. In this paradigm, any user could potentially access any data. There's just too many data sources, too many users, and too many things that can go wrong. And to scale that is really hard. So, like, with Immuta, what we've done, versus what everyone else has done, is we natively embedded into every single one of those compute partners. So, Snowflake, Databricks, BigQuery, Redshift, Synapse, on and on. Natively, underneath the covers, so that when BI tools, those data science tools, hit Snowflake, they don't have to rewrite any of their code, but we automatically enforce policy without them having to do anything. And then we consistently audit that. I call that the separation of policy from platform. So, just like in the world of big data, when we had to separate compute from storage, in this world, because we're global, right? So we have a distributed workforce, and our data needs to abide by all these new security rules and regulations. We provide a flexible framework for them to be able to operate at that scale. And we're the only ones in the world doing it. >> Dave Vellante: See, the key there is, I mean, Snowflake is obviously building out its data cloud, and the functions that it's building in are quite impressive. >> Yeah. >> Dave Vellante: But you know at some point a customer's going to say, look, I have other stuff, whether it's in an Oracle database, or a data lake, or wherever, and that should just be a node on this global, whatever you want to call it, mesh or fabric. And then if I'm hearing you right, you participate in all of that. >> Correct, yeah. We kind of, we were able to just natively inject into each, and then be able to enforce that policy consistently, right? So, hey, can you access HIPAA data? Who are you? Are you authorized to use this?
What's the purpose you want to query this data? Is it for fraud? Is it for marketing? So, what we're trying to do as part of this new design paradigm is ensure that we can automate nearly the entire data access process, but with confidence, and de-risk it. That's kind of the key thing. But the one thing I will mention is, I think we talk a lot about the core compute, but I think, especially at this summit, data sharing is everything. Right? And this concept of no-copy data sharing, because the data is too big and there's too many sets to share, those are the keys to the kingdom. You got to get your lake and your warehouse set with good policy, so you can effectively share it. >> Yeah, so, I wanted to just follow up, if I may. So, you'd mentioned separating compute from storage, and a lot of VC money poured into that. A lot of VC money poured into cloud database. How do you see, do you see Snowflake differentiating substantially from all the other cloud databases? And how so? >> I think it's the ease of use, right? Apple produces a phone that isn't much different than other competitors'. Right? But what they do is, end to end, they provide an experience that's very simple. Right? And so yes. Are there other warehouses? Are there other ways to, you know, you heard about their analytic workloads now, you know, through Unistore, where they're going to be able to process analytical workloads as well as their ad hoc queries. I think other vendors are obviously going to have the same capabilities, but I think the user experience of Snowflake right now is top tier. Right? Whether I'm a small business, I can load my data in there and build an app really quickly. Or if I'm a JP Morgan or, you know, a Wesfarmers, I can move legacy, you know, monolithic architectures in there in months. I mean, these are six-month transitions. Think about it: 20 years of work is now being transitioned to the cloud in six months. That's the difference.
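Matthew's "separation of policy from platform" idea above can be sketched in a few lines of Python. This is purely illustrative: the policy object, roles, and column names are invented, and real Immuta policies are enforced natively inside each compute engine rather than in application code as here.

```python
# Hypothetical sketch of "separation of policy from platform": one
# policy definition is enforced identically no matter which engine
# serves the rows, instead of re-coding rules inside each app.

POLICY = {
    "table": "claims",
    "masked_columns": ["ssn"],        # columns to redact
    "allowed_roles": {"auditor"},     # roles that may see them in the clear
}

def enforce(policy, user_roles, rows):
    """Apply the same policy to rows coming from any platform."""
    if user_roles & policy["allowed_roles"]:
        return rows                   # privileged role: see everything
    redacted = []
    for row in rows:
        clean = dict(row)
        for col in policy["masked_columns"]:
            if col in clean:
                clean[col] = "***"    # mask the sensitive column
        redacted.append(clean)
    return redacted

# Two different "platforms" serving rows from the same logical table.
warehouse_rows = [{"claim_id": 1, "ssn": "123-45-6789"}]
lake_rows = [{"claim_id": 2, "ssn": "987-65-4321"}]

# The single POLICY object governs both, without per-app logic.
print(enforce(POLICY, {"analyst"}, warehouse_rows))  # ssn masked
print(enforce(POLICY, {"auditor"}, lake_rows))       # ssn visible
```

The point of the sketch is that the policy lives in one place; adding a third "platform" requires no new enforcement code.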
>> So measuring ease of use, and time to value, time to market. >> Yeah. That's it, everything is time to value. No one wants to manage the infrastructure. In the Hadoop world, no one wants to have expensive, specialized engineers that are, you know, keeping up your Hadoop infrastructure any longer. Those days are completely over. >> Can you share an example of a joint customer, where really the joint value proposition that Immuta and Snowflake bring are delivering some pretty substantial outcomes? >> Yeah. What we're seeing is, and we're obviously highly incentivized to get them in there, because it's easier on us, right? Because we can leverage their row- and column-level security. We can leverage the features that they've built in to provide a better experience to our customers. And so when we talk about large banks, they're trying to move Teradata workloads into Snowflake. When we talk about clinical trial management, they're trying to get away from physical copies of data, and leverage the exchange mechanisms, so you can manage data contracts, right? So like, you know, when we think of even a company like Latch, right? Latch uses us to be able to oversee all of the consumer data they have. Without a Snowflake, what ends up happening is they end up having to double down and invest in their own people building out all their own infrastructure. And they don't have the capital to invest in third-party tools like us that keep them safe, prevent data leaks, allow them to do more and get more value out of their data, which is what they're good at. >> So TCO reduction, I'm hearing. >> Matthew Carroll: Yes, exactly. >> Matt, where are you as a company? You've obviously made a lot of progress since we last talked. Maybe give us the update on, you know, the headcount, and fundraising, and- >> Yeah, we're just at about 250 people, which scares me every day, but it's awesome.
But yeah, we've just raised 100 million dollars- >> Lisa Martin: Saw that, congratulations. >> Series E, thank you, with NightDragon leading it. And NightDragon was very tactical as well. We found that, with data governance, I think what you're seeing in the market now is the catalog players are really maturing, and they're starting to add a suite of features around governance, right? So quality control, observability, and just traditional asset management around their data. What we are finding is that there's a new gap in this space, right? So if you think about legacy, it's: we had infrastructure security, we had the four walls, and we protect our four walls. Then we moved to network security. We said, oh, the adversary is inside, zero trust. So, let's protect all of our endpoints, right? But what we're seeing now is data is the security flaw. Anyone could potentially access it in this organization. So how do we protect data? And so what we have matured into is a data security company. What we have found is, there's this next generation of data security products that are missing. And it's this blend between authentication, like an Okta or an Auth0, and auth... I'm sorry, authorization, like Immuta, where we're authorizing certain access. And we have to pair that together with modern observability, like a Datadog, to provide a layer above this modern data stack, to protect the data, to analyze the users, to look for threats. And so Immuta has transformed with this capital. And we brought Dave DeWalt onto our board because he's a cybersecurity expert; he gives us that understanding of what it is like to sell into this modern cyber environment. So now, we have this platform where we can discover data, analyze it, tag it, understand its risk, secure it, to author and enforce policies. And then monitor, the key thing is monitoring. Who is using the data? Why are they using the data? What are the risks to that? In order to enforce the security.
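The pairing of authentication (who are you?) with authorization (what may you do?) described above can be sketched as two separate checks composed into one access decision. Every name here (tokens, the identity map, the grants table) is an invented stand-in for systems like Okta or Auth0 on one side and Immuta on the other.

```python
# Hypothetical sketch: authentication and authorization as two
# distinct layers composed into a single access decision.

IDENTITIES = {"token-abc": "alice"}        # stand-in for an identity provider
GRANTS = {("alice", "claims"): {"read"}}   # stand-in for authorization grants

def authenticate(token):
    """Resolve a token to an identity; None means unauthenticated."""
    return IDENTITIES.get(token)

def authorize(user, resource, action):
    """Check whether this identity holds a grant for this action."""
    return action in GRANTS.get((user, resource), set())

def access(token, resource, action):
    """The composed decision: both layers must pass."""
    user = authenticate(token)
    return user is not None and authorize(user, resource, action)

print(access("token-abc", "claims", "read"))    # authenticated and granted
print(access("token-abc", "claims", "delete"))  # authenticated, not granted
print(access("bad-token", "claims", "read"))    # not authenticated
```

Keeping the two layers separate is the design point: swapping the identity provider or the policy engine does not disturb the other half.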
So, we are a data security platform now with this raise. >> Okay. Well, that's a new, you know, vector for you guys. I always saw you as an adjacency, but you're saying smack dab in the heart of it. >> Matthew Carroll: Yes. Yeah. We're jumping right in. What we've seen is there is a massive global gap. Data is no longer just in one country. So it is, how do we automate policy enforcement of regulatory oversight, like GDPR or CCPA, which I think got this whole category going. But then we quickly realized, well, we have data jurisdiction. So, where does that data have to live? Where can I send it to? Because from Europe to the US, what's the export treaty? We don't have defined laws anymore. So we needed a flexible framework to handle that. And now what we're seeing is data leaks upon data leaks, and you know, the Snowflakes and the other cloud compute vendors, the last thing they ever want is a data leak out of their ecosystem. So, the security aspects are now becoming more and more important. It's going to be an insider threat. It's someone that already has access to that and has the rights to it. That's going to be the risk. And there is no pattern for a data scientist. There's no zero trust model for data. So we have to create that. >> How are you, last question, how are you going to be using the 100 million raised in Series E funding, which you mentioned? How are you going to be leveraging that investment to turn the volume up on data security? >> Well, we still have another 80 million in the bank from our last raise, so 180 million now, and potentially more soon, we'll kind of throw that out there. But the first thing is M&A. I believe in a recessionary market, we're going to see these platforms consolidate. Larger customers of ours are driving us to say, hey, we need fewer tools. We need to make this easier, so we can go faster. Even in a recessionary market, these customers are not going to go slower.
They're moving into the cloud as fast as possible, but it needs to be easier, right? It's going back to the mid-nineties kind of Lego blocks, right? Like the IBM, the SAP, the Informatica, right? So that's number one. Number two is investing globally. Customer success, engineering, support, 24-by-7 support globally. Global infrastructure on cloud, moving to true SaaS everywhere in the world. That's where we're going. So sales, engineering, and customer success globally. And the third is doubling down on R&D. That monitoring capability, we're going to be building software around: how do we monitor and understand the risk of users and third parties? So how do you handle data contracts? How do you handle data use agreements? So those are three areas we're focused on. >> Dave Vellante: How are you scaling go-to-market at this point? I mean, I presume you are. >> Yeah, well, as we're leveraging these types of engagements, our partners are the big cloud compute vendors, right? Those data clouds. We're injecting as much as we can into them and helping them get more workloads onto their infrastructure, because it benefits us. And then obviously we're working with GSIs and RSIs to kind of help with this transformation, but we're all in. We're actually deprecating support of legacy connectors, and we're all in on cloud compute. >> How did the pivot to all in on security affect your product portfolio? I mean, is that more positioning, or were there other product extensions where you had to test product-market fit? >> Yeah. This comes out of customer drive. So we've been holding customer advisory boards across Europe, Asia and the U.S. And what we just saw was a pattern among some of these largest banks and pharmaceutical companies and insurance companies in the world: hey, we need to understand who is actually on our data. We have a better understanding of our data now, but we don't actually understand why they're using our data.
Why are they running these types of queries? Is this machine, you know, logic that we're running on this now? We invested all this money in AI. What's the risk? They just don't know. And so, yeah, it's going to change our product portfolio. We modularized our platform into discrete components over the past year, specifically so we can start building custom applications on top of it, for specific users like the CSO, like, you know, the legal department, and like third-party regulators to come in, as well as, going back to data sharing, to build data use agreements between one or many entities, right? So an S&P Global can expose their data to third parties and have one consistent digital contract. No more long memo of a contract that you have to read; Immuta can automate those data contracts between one or many entities. >> Dave Vellante: And make it a checkbox item. >> It's just a checkbox, but then you can audit it all, right? >> The key thing is this, I always tell people, there's negligence and gross negligence. Negligence, you can go back and fix something; gross negligence, you didn't have any controls in place. Regulators want you to be, at worst, negligent. Gross negligence, they get upset. (laughs) >> Matthew, it sounds like great stuff is going on at Immuta, lots of money in the bank. And it sounds like a very clear and strategic vision and direction. We thank you so much for joining us on theCUBE this morning. >> Thank you so much. >> For our guest and Dave Vellante, I'm Lisa Martin. You're watching theCUBE's coverage of day two of Snowflake Summit '22, coming at ya live from the show floor in Las Vegas. Be right back with our next guest. (Soft music)
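The automated "checkbox" data use agreement Matthew describes above can be sketched as a small, auditable check: the digital contract is just data, each request is evaluated against it, and every decision is logged. The entity and purpose names below are invented for illustration.

```python
# Hypothetical sketch of an automated data use agreement: the
# "digital contract" is a data structure, requests are checked against
# it, and every decision is recorded for audit.

AGREEMENT = {
    "provider": "S&P Global",
    "consumers": {"InsureCo"},                       # parties to the contract
    "allowed_purposes": {"benchmarking", "fraud_detection"},
}
AUDIT_LOG = []

def request_access(consumer, purpose):
    """Evaluate one request against the agreement and audit it."""
    allowed = (consumer in AGREEMENT["consumers"]
               and purpose in AGREEMENT["allowed_purposes"])
    AUDIT_LOG.append({"consumer": consumer, "purpose": purpose,
                      "allowed": allowed})
    return allowed

print(request_access("InsureCo", "benchmarking"))  # permitted use
print(request_access("InsureCo", "marketing"))     # purpose outside contract
print(request_access("Acme", "benchmarking"))      # not a party to it
```

Because the contract is machine-readable, the "audit it all" step is just a query over the log rather than a re-read of a long memo.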

Published Date : Jun 15 2022



Christian Keynote with Disclaimer


 

(upbeat music) >> Hi everyone, thank you for joining us at the Data Cloud Summit. The last couple of months have been an exciting time at Snowflake. And yet, what's even more compelling to all of us at Snowflake is what's ahead. Today I have the opportunity to share new product developments that will extend the reach and impact of our Data Cloud and improve the experience of Snowflake users. Our product strategy is focused on four major areas. First, Data Cloud content. In the Data Cloud, silos are eliminated, and our vision is to bring the world's data within reach of every organization. You'll hear about new data sets and data services available in our data marketplace and see how previous barriers to sourcing and unifying data are eliminated. Second, extensible data pipelines. As you gain frictionless access to a broader set of data through the Data Cloud, Snowflake's platform brings additional capabilities and extensibility to your data pipelines, simplifying data ingestion and transformation. Third, data governance. The Data Cloud eliminates silos and breaks down barriers, and in a world where data collaboration is the norm, the importance of data governance is ratified and elevated. We'll share new advancements to support how the world's most demanding organizations mobilize their data while maintaining high standards of compliance and governance. Finally, our fourth area focuses on platform performance and capabilities. We remain laser focused on continuing to lead with the most performant and capable data platform. We have some exciting news to share about the core engine of Snowflake. As always, we love showing you Snowflake in action, and we prepared some demos for you.
Also, we'll keep coming back to the fact that one of the characteristics of Snowflake that we're proudest of is that we offer a single platform from which you can operate all of your data workloads, across clouds and across regions. Which workloads, you may ask? Specifically, data warehousing, data lake, data science, data engineering, data applications, and data sharing. Snowflake makes it possible to mobilize all your data in service of your business without the cost, complexity and overhead of managing multiple systems, tools and vendors. Let's dive in. As you heard from Frank, the Data Cloud offers a unique capability to connect organizations and create collaboration and innovation across industries fueled by data. The Snowflake data marketplace is the gateway to the Data Cloud, providing visibility for organizations to browse and discover data that can help them make better decisions. For data providers on the marketplace, there is a new opportunity to reach new customers, create new revenue streams, and radically decrease the effort and time to data delivery. Our marketplace dramatically reduces the friction of sharing and collaborating with data, opening up new possibilities to all participants in the Data Cloud. We introduced the Snowflake data marketplace in 2019. And it is now home to over 100 data providers, with half of them having joined the marketplace in the last four months. Since our most recent product announcements in June, we have continued broadening the availability of the data marketplace, across regions and across clouds. Our data marketplace provides the opportunity for data providers to reach consumers across cloud and regional boundaries. A critical aspect of the Data Cloud is that we envision organizations collaborating not just in terms of data, but also data-powered applications and services.
Think of instances where a provider doesn't want to open access to the entirety of a data set, but wants to provide access to business logic that has access to and leverages such a data set. That is what we call data services. And we want Snowflake to be the platform of choice for developing, discovering and consuming such rich building blocks. To see how the data marketplace comes to life, and in particular one of these data services, let's jump into a demo. For all of our demos today, we're going to put ourselves in the shoes of a fictional global insurance company. We've called it Insureco. Insurance is a data intensive and highly regulated industry. Having the right access control and insight from data is core to every insurance company's success. I'm going to turn it over to Prasanna to show how the Snowflake data marketplace can solve a data discoverability and access problem. >> Let's look at how Insureco can leverage data and data services from the Snowflake data marketplace and use it in conjunction with its own data in the Data Cloud to do three things: better detect fraudulent claims, arm its agents with the right information, and benchmark business health against competition. Let's start with detecting fraudulent claims. I'm an analyst in the Claims Department. I have auto claims data in my account. I can see there are 2000 auto claims, many of these submitted by auto body shops. I need to determine if they are valid and legitimate. In particular, could some of these be insurance fraud? By going to the Snowflake data marketplace, where numerous data providers and data service providers can list their offerings, I find the Quantifind data service. It uses a combination of external data sources and predictive risk typology models to inform the risk level of an organization. Quantifind's external sources include sanctions and blacklists, negative news, social media, and real-time search engine results.
That's a wealth of data and models built on that data which we don't have internally. So I'd like to use Quantifind to determine a fraud risk score for each auto body shop that has submitted a claim. First, the Snowflake data marketplace made it really easy for me to discover a data service like this. Without the data marketplace, finding such a service would be a lengthy ad hoc process of doing web searches and asking around. Second, once I find Quantifind, I can use Quantifind's service against my own data in three simple steps using data sharing. I create a table with the names and addresses of auto body shops that have submitted claims. I then share the table with Quantifind to start the risk assessment. Quantifind does the risk scoring and shares the data back with me. Quantifind uses external functions, which we introduced in June, to get results from their risk prediction models. Without Snowflake data sharing, we would have had to contact Quantifind to understand what format they wanted the data in, then extract this data into a file, FTP the file to Quantifind, wait for the results, then ingest the results back into our systems for them to be usable. Or I would have had to write code to call Quantifind's API. All of that would have taken days. In contrast, with data sharing, I can set this up in minutes. What's more, now that I have set this up, as new claims are added in the future, they will automatically leverage Quantifind's data service. I view the scores returned by Quantifind and see that two entities in my claims data have a high score for insurance fraud risk. I open up the link returned by Quantifind to read more, and find that this organization has been involved in an insurance crime ring. Looks like that is a claim that we won't be approving. Using the Quantifind data service through the Snowflake data marketplace gives me access to a risk scoring capability that we don't have in house, without having to call custom APIs.
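The three-step sharing flow just described (share a table out, let the provider score it, receive the scores back and join them on) can be sketched in plain Python. The shop names and the scoring rule below are invented; in the actual demo, each hand-off is a Snowflake data share and the scoring is done by Quantifind's external models.

```python
# Hypothetical sketch of the three-step data-sharing flow:
# (1) share a table of body shops, (2) the provider scores it,
# (3) the scores come back and join onto the claims data.

claims = [
    {"claim_id": 1, "shop": "Ace Auto Body"},
    {"claim_id": 2, "shop": "Shady Repairs"},
]

def share_out(rows):
    # Step 1: expose only the columns the provider needs.
    return [{"shop": r["shop"]} for r in rows]

def provider_score(shared_rows):
    # Step 2: stand-in for the provider's external risk models.
    flagged = {"Shady Repairs"}
    return {r["shop"]: (0.9 if r["shop"] in flagged else 0.1)
            for r in shared_rows}

def join_back(rows, scores):
    # Step 3: join the returned scores back onto the claims.
    return [dict(r, fraud_risk=scores[r["shop"]]) for r in rows]

scored = join_back(claims, provider_score(share_out(claims)))
high_risk = [r for r in scored if r["fraud_risk"] > 0.5]
print(high_risk)
```

Note that the consumer never sees the provider's models and the provider never sees the full claims table; each side only receives the columns the flow requires.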
For a provider like Quantifind, this drives new leads and monetization opportunities. Now that I have identified potentially fraudulent claims, let's move on to the second part. I would like to share this fraud risk information with the agents who sold the corresponding policies. To do this, I need two things. First, I need to find the agents who sold these policies. Then I need to share with these agents the fraud risk information that we got from Quantifind. But I want to share it such that each agent only sees the fraud risk information corresponding to claims for policies that they wrote. To find agents who sold these policies, I need to look up our Salesforce data. I can find this easily within Insureco's internal data exchange. I see there's a listing with Salesforce data. Our Sales Ops team has published this listing, so I know it's our officially blessed data set, and I can immediately access it from my Snowflake account without copying any data or having to set up ETL. I can now join Salesforce data with my claims to identify the agents for the policies that were flagged to have fraudulent claims. I also have the Snowflake account information for each agent. Next, I create a secure view that joins on an entitlements table, such that each agent can only see the rows corresponding to policies that they have sold. I then share this directly with the agents. This share contains the secure view that I created with the names of the auto body shops, and the fraud risk identified by Quantifind. Finally, let's move on to the third and last part. Now that I have detected potentially fraudulent claims, I'm going to move on to building a dashboard that our executives have been asking for. They want to see how Insureco compares against other auto insurance companies on key metrics, like total claims paid out for the auto insurance line of business nationwide. I go to the Snowflake data marketplace and find SNL U.S. Insurance Statutory Data from S&P.
This data is included with Insureco's existing subscription with S&P, so when I request access to it, S&P can immediately share this data with me through Snowflake data sharing. I create a virtual database from the share, and I'm ready to query this data, no ETL needed. And since this is a virtual database, pointing to the original data in S&P's Snowflake account, I have access to the latest data as it arrives in S&P's account. I see that the SNL U.S. Insurance Statutory Data from S&P has data on assets, premiums earned and claims paid out by each US insurance company in 2019. This data is broken up by line of business and geography, and in many cases goes beyond the data that would be available from public financial filings. This is exactly the data I need. I identify a subset of comparable insurance companies whose net total assets are within 20% of Insureco's, and whose lines of business are similar to ours. I can now create a Snowsight dashboard that compares Insureco against similar insurance companies on key metrics, like net earned premiums, and net claims paid out in 2019 for auto insurance. I can see that while we are below median on net earned premiums, we are doing better than our competition on total claims paid out in 2019, which could be a reflection of our improved claims handling and fraud detection. That's a good insight that I can share with our executives. In summary, the Data Cloud enabled me to do three key things. First, seamlessly find data and data services that I need to do my job, be it an external data service like Quantifind, an external data set from S&P, or internal data from Insureco's data exchange. Second, get immediate live access to this data. And third, control and manage collaboration around this data. With Snowflake, I can mobilize data and data services across my business ecosystem in just minutes. >> Thank you Prasanna. Now I want to turn our focus to extensible data pipelines.
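The entitlement-based secure view from the demo, where each agent sees only the rows for policies they themselves sold, can be sketched like this. The agent and policy identifiers are invented; in Snowflake this would be a secure view joined to an entitlements table, with the filter applied per querying account.

```python
# Hypothetical sketch of an entitlement-filtered secure view: the
# fraud-risk rows are filtered per agent by an entitlements table.

fraud_rows = [
    {"policy_id": "P1", "shop": "Shady Repairs", "risk": 0.9},
    {"policy_id": "P2", "shop": "Crash Kings",   "risk": 0.8},
]

# Which policies each agent sold (the entitlements table).
entitlements = {"agent_a": {"P1"}, "agent_b": {"P2"}}

def secure_view(agent):
    """Return only the rows this agent is entitled to see."""
    allowed = entitlements.get(agent, set())
    return [r for r in fraud_rows if r["policy_id"] in allowed]

print(secure_view("agent_a"))   # only P1
print(secure_view("agent_b"))   # only P2
print(secure_view("agent_c"))   # no entitlements, no rows
```

The data is shared once; the per-agent filtering comes entirely from the join against entitlements, so adding an agent is a row insert, not a new share.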
We believe there are two different and important ways of making Snowflake's platform highly extensible. First, by enabling teams to leverage services or business logic that live outside of Snowflake, interacting with data within Snowflake. We do this through a feature called external functions, a mechanism to conveniently bring data to where the computation is. We announced this feature for calling regional endpoints via AWS API Gateway in June, and it's currently available in public preview. We are also now in public preview supporting Azure API Management and will soon support Google API Gateway and AWS private endpoints. The second extensibility mechanism does the converse. It brings the computation to Snowflake to run closer to the data. We will do this by enabling the creation of functions and procedures in SQL, Java, Scala or Python, ultimately providing choice based on the programming language preference for you or your organization. You will see Java, Scala and Python available through private and public previews in the future. The possibilities enabled by these extensibility features are broad and powerful. However, our commitment to being a great platform for data engineers, data scientists and developers goes far beyond programming language. Today, I am delighted to announce Snowpark, a family of libraries that will bring a new experience to programming data in Snowflake. Snowpark enables you to write code directly against Snowflake in a way that is deeply integrated into the languages I mentioned earlier, using familiar concepts like DataFrames. But the most important aspect of Snowpark is that it has been designed and optimized to leverage the Snowflake engine with its main characteristics and benefits: performance, reliability, and scalability with near zero maintenance. Think of the power of declarative SQL statements available through a well-known API in Scala, Java or Python, all of this against data governed in your core data platform.
We believe Snowpark will be transformative for data programmability. I'd like to introduce Sri to showcase how our fictitious insurance company Insureco will be able to take advantage of the Snowpark API for data science workloads. >> Thanks, Christian. Hi, everyone. I'm Sri Chintala, a product manager at Snowflake focused on extensible data pipelines. And today, I'm very excited to show you a preview of Snowpark. In our first demo, we saw how Insureco could identify potentially fraudulent claims. Now, for all the valid claims, InsureCo wants to ensure they're providing excellent customer service. To do that, they put in place a system to transcribe all of their customer calls, so they can look for patterns. A simple thing they'd like to do is detect the sentiment of each call so they can tell which calls were good and which were problematic. They can then better train their claim agents for challenging calls. Let's take a quick look at the work they've done so far. InsureCo's data science team used Snowflake's external functions to quickly and easily train a machine learning model in H2O AI. Snowflake has direct integrations with H2O and many other data science providers, giving Insureco the flexibility to use a wide variety of data science libraries, frameworks, or tools to train their model. Now that the team has a custom-trained sentiment model tailored to their specific claims data, let's see how a data engineer at Insureco can use Snowpark to build a data pipeline that scores customer call logs using the model hosted right inside of Snowflake. As you can see, we have the transcribed call logs stored in the customer call logs table inside Snowflake. Now, as a data engineer trained in Scala, and used to working with systems like Spark and Pandas, I want to use familiar programming concepts to build my pipeline. Snowpark solves for this by letting me use popular programming languages like Java or Scala.
It also provides familiar concepts in APIs, such as the DataFrame abstraction, optimized to leverage and run natively on the Snowflake engine. So here I am in my IDE, where I've written a simple Scala program using the Snowpark libraries. The first step in using the Snowpark API is establishing a session with Snowflake. I use the session builder object and specify the required details to connect. Now, I can create a DataFrame for the data in the transcripts column of the customer call logs table. As you can see, the Snowpark API provides native language constructs for data manipulation. Here, I use the select method provided by the API to specify the column names to return, rather than writing "select transcripts" as a string. By using the native language constructs provided by the API, I benefit from features like IntelliSense and type checking. Here you can see some of the other common methods that the DataFrame class offers, like filter, like join, and others. Next, I define a get sentiment user-defined function that will return a sentiment score for an input string by using our pre-trained H2O model. From the UDF, we call the score method that initializes and runs the sentiment model. I've built this helper into a Java file, which along with the model object and license are added as dependencies that Snowpark will send to Snowflake for execution. As a developer, this is all programming that I'm familiar with. We can now call our get sentiment function on the transcripts column of the DataFrame and write back the results of the scored transcripts to a new target table. Let's run this code and switch over to Snowflake to see the scored data and also all the work that Snowpark has done for us on the back end. If I do a select star from scored logs, we can see the sentiment score of each call right alongside the transcript. With Snowpark, all the logic in my program is pushed down into Snowflake.
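The pipeline Sri walks through (select the transcript column, apply a sentiment UDF, write the scores to a target table) can be approximated in stand-alone Python. The word-list scorer below is an invented stand-in for the pre-trained H2O model, and the lists of dicts stand in for Snowpark DataFrames; the real demo runs this logic pushed down inside Snowflake.

```python
# A rough, self-contained sketch of the demo pipeline: select the
# transcript column, score it with a sentiment "UDF", and build the
# scored target table. The scorer is a toy stand-in for the model.

call_logs = [
    {"call_id": 1, "transcript": "thank you that was great help"},
    {"call_id": 2, "transcript": "this is terrible I want to complain"},
]

def get_sentiment(text):
    """Toy stand-in for the trained model's score() method."""
    positive = {"great", "thank", "help"}
    negative = {"terrible", "complain"}
    words = set(text.split())
    return len(words & positive) - len(words & negative)

# Like df.select(col("transcript")) followed by the UDF call and a
# write to a target table: one scored row per call.
scored_logs = [
    {"call_id": r["call_id"], "sentiment": get_sentiment(r["transcript"])}
    for r in call_logs
]
print(scored_logs)
```

In the Snowpark version, the equivalent of `get_sentiment` ships to Snowflake as a UDF and the "select, score, write" steps execute in the warehouse, which is the point of the push-down design.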
I can see in the query history that Snowpark has created a temporary Java function to host the pre-trained H2O model, and that the model is running right in my Snowflake warehouse. Snowpark has allowed us to do something completely new in Snowflake. Let's recap what we saw. With Snowpark, InsureCo was able to use their preferred programming language, Scala, and use the familiar DataFrame constructs to score data using a machine learning model. With support for Java UDFs, they were able to run a trained model natively within Snowflake. And finally, we saw how Snowpark executed computationally intensive data science workloads right within Snowflake. This simplifies InsureCo's data pipeline architecture, as it reduces the number of additional systems they have to manage. We hope that extensibility with Scala, Java and Snowpark will enable our users to work with Snowflake in their preferred way while keeping the architecture simple. We are very excited to see how you use Snowpark to extend your data pipelines. Thank you for watching, and with that, back to you, Christian. >> Thank you Sri. You saw how Sri could utilize Snowpark to efficiently perform advanced sentiment analysis. But of course, if this use case was important to your business, you'd want to fully automate this pipeline and analysis. Imagine being able to do all of the following in Snowflake. Your pipeline could start far upstream of what you saw in the demo, by storing your actual customer care call recordings in Snowflake. You may notice that this is new for Snowflake; we'll come back to the idea of storing unstructured data in Snowflake at the end of my talk today. Once you have the data in Snowflake, you can use our streams and tasks capabilities to call an external function to transcribe these files. To simplify this flow even further, we plan to introduce a serverless execution model for tasks, where Snowflake can automatically size and manage resources for you.
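The incremental streams-and-tasks pipeline Christian outlines can be sketched in SQL roughly as follows; table, stream, task, and warehouse names are illustrative, and the `get_sentiment` scoring function is assumed to exist:

```sql
-- Capture newly arrived transcripts as they land in the source table.
CREATE OR REPLACE STREAM call_log_stream ON TABLE customer_call_logs;

-- A task that scores only the new rows whenever the stream has data.
-- (The serverless sizing mentioned above would remove the need to name
-- a warehouse here; this sketch uses the classic warehouse-backed form.)
CREATE OR REPLACE TASK score_new_calls
  WAREHOUSE = analytics_wh
  SCHEDULE = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('call_log_stream')
AS
  INSERT INTO scored_logs
  SELECT transcripts, get_sentiment(transcripts) AS sentiment
  FROM call_log_stream;

-- Tasks are created suspended; resume it to start the pipeline.
ALTER TASK score_new_calls RESUME;
```

Consuming from the stream inside the task gives the incremental, per-transcript processing described: each run sees only rows added since the previous run.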
After this step, you can use the same serverless task to execute sentiment scoring of your transcripts as shown in the demo, with incremental processing as each transcript is created. Finally, you can surface the sentiment score either via Snowsight, or through any tool you use to share insights throughout your organization. In this example, you see data being transformed from a raw asset into a higher level of information that can drive business action, all fully automated, all in Snowflake. Turning back to InsureCo, you know how important data governance is for any major enterprise, but particularly for one in this industry. Insurance companies manage highly sensitive data about their customers, and have some of the strictest requirements for storing and tracking such data, as well as managing and governing it. At Snowflake, we think about governance as the ability to know your data, manage your data and collaborate with confidence. As you saw in our first demo, the Data Cloud enables seamless collaboration, control and access to data via the Snowflake Data Marketplace. And companies may set up their own data exchanges to create similar collaboration and control across their ecosystems. In future releases, we expect to deliver enhancements that create more visibility into who has access to what data and provide usage information of that data. Today, we are announcing a new capability to help Snowflake users better know and organize your data. This is our new tagging framework. Tagging in Snowflake will allow user-defined metadata to be attached to a variety of objects. We built a broad and robust framework with powerful implications. Think of the ability to annotate warehouses with cost center information for tracking, or think of annotating tables and columns with sensitivity classifications. Our tagging capability will enable the creation of company-specific business annotations for objects in Snowflake's platform.
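Since the tagging framework is only being announced here, its exact syntax isn't shown in the talk. A plausible sketch of the two annotation examples mentioned, cost centers on warehouses and sensitivity classifications on columns, might look like this (all names are illustrative assumptions):

```sql
-- Define reusable tag keys.
CREATE TAG cost_center;
CREATE TAG sensitivity;

-- Annotate a warehouse for cost tracking.
ALTER WAREHOUSE analytics_wh SET TAG cost_center = 'claims_analytics';

-- Annotate a column with a sensitivity classification.
ALTER TABLE customer_call_logs
  MODIFY COLUMN transcripts SET TAG sensitivity = 'confidential';
```

The value of the framework is that these key-value annotations live in the platform's metadata, so they can later be queried for auditing or combined with enforcement policies.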
Another key aspect of data governance in Snowflake is our policy-based framework, where you specify what you want to be true about your data, and Snowflake enforces those policies. We announced one such policy earlier this year, our dynamic data masking capability, which is now available in public preview. Today, we are announcing a great complementary policy to achieve row-level security. To see how row-level security can enhance InsureCo's ability to govern and secure data, I'll hand it over to Artin for a demo. >> Hello, I'm Artin Avanes, Director of Product Management for Snowflake. As Christian has already mentioned, the rise of the Data Cloud greatly accelerates the ability to access and share diverse data, leading to greater data collaboration across teams and organizations. Controlling data access with ease and ensuring compliance at the same time is top of mind for users. Today, I'm thrilled to announce our new row access policies that will allow users to define various rules for accessing data in the Data Cloud. Let's check back in with InsureCo to see some of these in action and highlight how they work with other existing policies one can define in Snowflake. Because InsureCo is a multinational company, it has to take extra measures to ensure data across geographic boundaries is protected to meet a wide range of compliance requirements. The InsureCo team has been asked to segment what data sales team members have access to based on where they are regionally. In order to make this possible, they will use Snowflake's row access policies to implement row-level security. We are going to apply policies for three of InsureCo's sales team members with different roles. Alice, an executive, must be able to view sales data from both North America and Europe. Alex, a North America sales manager, will be limited to access sales data from North America only. And Jordan, a Europe sales manager, will be limited to access sales data from Europe only.
As a first step, the security administrator needs to create a lookup table that will be used to determine which data is accessible based on each role. As you can see, the lookup table has the roles and their associated regions, both of which will be used to apply the policies that we will now create. Row access policies are implemented using standard SQL syntax to make it easy for administrators to create policies like the one our administrator is looking to implement. And similar to masking policies, row access policies leverage our flexible and expressive policy language. In this demo, our admin creates a row access policy that uses the role and region of a user to determine what row-level data they have access to when queries are executed. When user queries are executed against a table protected by such a row access policy, Snowflake's query engine will dynamically generate and apply the corresponding predicate to filter out rows the user is not supposed to see. With the policy now created, let's log in as our sales users and see if it worked. Recall that as a sales executive, Alice should have the ability to see all rows from North America and Europe. Sure enough, when she runs her query, she can see all rows, so we know the policy is working for her. You may also have noticed that some columns are showing masked data. That's because our administrator is also using our previously announced data masking capabilities to protect these data attributes for everyone in sales. When we look at our other users, we should notice that the same columns are also masked for them. As you see, you can easily combine masking and row access policies on the same data sets. Now let's look at Alex, our North American sales manager. Alex runs the same query as Alice. Row access policies leverage the lookup table to dynamically generate the corresponding predicates for this query. The result is we see that only the data for North America is visible.
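The demo's policy definitions aren't shown verbatim in the transcript. A sketch of the same pattern, with an illustrative lookup table, role names, and columns, could look like this:

```sql
-- Lookup table mapping roles to the regions they may see.
CREATE OR REPLACE TABLE sales_region_map (role_name STRING, region STRING);
INSERT INTO sales_region_map VALUES
  ('SALES_EXECUTIVE',  'NORTH_AMERICA'),
  ('SALES_EXECUTIVE',  'EUROPE'),
  ('SALES_MANAGER_NA', 'NORTH_AMERICA'),
  ('SALES_MANAGER_EU', 'EUROPE');

-- Row access policy: a row is visible when the current role maps to its region.
CREATE OR REPLACE ROW ACCESS POLICY region_policy
  AS (region STRING) RETURNS BOOLEAN ->
  EXISTS (
    SELECT 1 FROM sales_region_map m
    WHERE m.role_name = CURRENT_ROLE() AND m.region = region
  );

ALTER TABLE sales ADD ROW ACCESS POLICY region_policy ON (region);

-- A masking policy on a sensitive column, combinable with the row policy.
CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() = 'SALES_EXECUTIVE' THEN val ELSE '*****' END;

ALTER TABLE sales MODIFY COLUMN customer_email SET MASKING POLICY email_mask;
```

With both policies attached, a query like `SELECT * FROM sales` is automatically filtered by the generated region predicate and masked per column, which is exactly the combined behavior Alice, Alex, and Jordan see in the demo.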
Notice too that the same columns are still masked. Finally, let's try Jordan, our European sales manager. Jordan runs the query, and the result is only the data for Europe, with the same columns also masked. Earlier you were introduced to masking policies; today you saw row access policies in action. And similar to our masking policies, row access policies in Snowflake will be a first-class capability, integrated seamlessly across all of Snowflake: everywhere you expect it to work, it does. If you're accessing data stored in external tables, semi-structured JSON data, or building data pipelines via streams, or plan to leverage Snowflake's data sharing functionality, you will be able to implement complex row access policies for all these diverse use cases and workloads within Snowflake. And with Snowflake's unique replication feature, you can instantly apply these new policies consistently to all of your Snowflake accounts, ensuring governance across regions and even across different clouds. In the future, we plan to demonstrate how to combine our new tagging capabilities with Snowflake's policies, allowing advanced auditing and enforcement of those policies with ease. And with that, let's pass it back over to Christian. >> Thank you Artin. We look forward to making these new tagging and row-level security capabilities available in private preview in the coming months. One last note on the broad area of data governance. A big aspect of the Data Cloud is the mobilization of data to be used across organizations. At the same time, privacy is an important consideration to ensure the protection of sensitive, personal or potentially identifying information. We're working on a set of product capabilities to simplify compliance with privacy-related regulatory requirements, and simplify the process of collaborating with data while preserving privacy.
Earlier this year, Snowflake acquired a company called CryptoNumerics to accelerate our efforts on this front, including the identification and anonymization of sensitive data. We look forward to sharing more details in the future. We've just shown you three demos of new and exciting ways to use Snowflake. However, I want to also remind you that our commitment to the core platform has never been greater. As you move workloads on to Snowflake, we know you expect exceptional price performance and continued delivery of new capabilities that benefit every workload. On price performance, we continue to drive performance improvements throughout the platform. Let me give you an example comparing an identical set of customer-submitted queries that ran in both August of 2019 and August of 2020. If I look at the set of queries that took more than one second to compile, 72% of those improved by at least 50%. When we make these improvements, execution time goes down. And by implication, the required compute time is also reduced. Based on our pricing model, where we charge for what you use, performance improvements not only deliver faster insights, but also translate into cost savings for you. In addition, we have two new major announcements on performance to share today. First, we announced our search optimization service during our June event. This service, currently in public preview, can be enabled on a table-by-table basis, and is able to dramatically accelerate lookup queries on any column, particularly those not used as clustering columns. We initially supported equality comparisons only, and today we're announcing expanded support for searches within values, such as pattern matching within strings. This will unlock a number of additional use cases, such as analytics on log data for performance or security purposes. This expanded support is currently being validated by a few customers in private preview, and will be broadly available in the future.
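Enabling the search optimization service is a per-table operation, as described above. A brief sketch of the lookup use case, with illustrative table and column names:

```sql
-- Enable the service on a table (table-by-table basis).
ALTER TABLE security_logs ADD SEARCH OPTIMIZATION;

-- Point lookups on columns other than the clustering key can then be
-- served from the service's search access paths instead of full scans.
SELECT *
FROM security_logs
WHERE source_ip = '10.1.2.3';

-- The expanded support announced here would extend acceleration to
-- pattern matching within strings, e.g. log analytics like:
SELECT *
FROM security_logs
WHERE message LIKE '%login failed%';
```

The second query reflects the newly announced capability, which the talk notes is still being validated in private preview.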
Second, I'd like to introduce a new service that will be in private preview in a future release: the query acceleration service. This new feature will automatically identify and scale out parts of a query that could benefit from additional resources and parallelization. This means that you will be able to realize dramatic improvements in performance. This is especially impactful for data science and other scan-intensive workloads. Using this feature is pretty simple. You define a maximum amount of additional resources that can be recruited by a warehouse for acceleration, and the service decides when it would be beneficial to use them. Given enough resources, a query over a massive data set can see orders of magnitude performance improvement compared to the same query without acceleration enabled. In our own usage of Snowflake, we saw a common query go 15 times faster without changing the warehouse size. All of these performance enhancements are extremely exciting, and you will see continued improvements in the future. We love to innovate and continuously raise the bar on what's possible. More importantly, we love seeing our customers adopt and benefit from our new capabilities. In June, we announced a number of previews, and we continue to roll those features out and see tremendous adoption, even before reaching general availability. Two of those announcements were the introduction of our geospatial support and policies for dynamic data masking. Both of these features are currently in use by hundreds of customers. The number of tables using our new geography data type recently crossed the hundred-thousand mark, and the number of columns with masking policies also recently crossed the same hundred-thousand mark. This momentum and level of adoption since our announcements in June is phenomenal. I have one last announcement to highlight today.
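The query acceleration service is only previewed here, so its final interface isn't shown. One plausible shape for the "maximum amount of additional resources" knob described above is a per-warehouse setting like the following; the parameter names are assumptions, not confirmed syntax from the talk:

```sql
-- Hypothetical sketch: opt a warehouse into acceleration and cap how much
-- extra serverless compute it may recruit (here, up to 8x the base size).
ALTER WAREHOUSE analytics_wh SET
  ENABLE_QUERY_ACCELERATION = TRUE
  QUERY_ACCELERATION_MAX_SCALE_FACTOR = 8;
```

The key design point from the talk is that the service, not the user, decides when the extra resources are beneficial; the user only sets the ceiling.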
In 2014, Snowflake transformed the world of data management and analytics by providing a single platform with first-class support for both structured and semi-structured data. Today, we are announcing that Snowflake will be adding support for unstructured data on that same platform. Think of the ability to use Snowflake to store, access, and share files. As an example, would you like to leverage the power of SQL to reason through a set of image files? We have a few customers as early adopters, and we'll provide additional details in the future. With this, you will be able to leverage Snowflake to mobilize all your data in the Data Cloud. Our customers rely on Snowflake as the data platform for every part of their business. However, the vision and potential of Snowflake is actually much bigger than the four walls of any organization. Snowflake has created the Data Cloud, a data-connected network, with a vision where any Snowflake customer can leverage and mobilize the world's data. Whether it's data sets or data services from traditional data providers or SaaS vendors, our marketplace creates opportunities for you and raises the bar in terms of what is possible. As examples, you can unify data across your supply chain to accelerate your time and quality to market. You can build entirely new revenue streams, or collaborate with a consortium on data for good. The possibilities are endless. Every company has the opportunity to gain richer insights, build greater products and deliver better services by reaching beyond the data that it owns. Our vision is to enable every company to leverage the world's data through seamless and governed access. Snowflake is your window into this data network and this broader opportunity. Welcome to the Data Cloud. (upbeat music)

Published Date : Nov 19 2020



Manosiz Bhattacharyya, Nutanix | Global .NEXT Digital Experience 2020


 

>> From around the globe, it's theCUBE, with coverage of the Global .NEXT Digital Experience, brought to you by Nutanix. >> I'm Stu Miniman, and this is theCUBE's coverage of the Nutanix .NEXT conference. This year it is the Global .NEXT Digital Experience, pulling together the events that they had dispersed across the globe and bringing them to you online. And I'm happy to welcome to the program a first-time guest but a long-time Nutanix engineering person, Manosiz Bhattacharyya. He's the Senior Vice President of Engineering at Nutanix; Mono, as everyone calls him. Thanks so much for joining us. >> Thank you. >> All right. So, you know, we've been doing theCUBE for over 10 years now. I remember the early days of talking to Dheeraj and the team when we first brought him on theCUBE. It was about taking some of the things that the hyperscalers did and bringing that to the enterprise, and that was actually, you know, one of the interesting components there. Dial back: flash was new to the enterprise, and we'd looked at one of the suppliers that was supplying some of the very largest companies in the world, and also some of the companies in the enterprise, like Fusion-io. It was a new flash package, and that was something that Nutanix used in the early days before it kind of went to more, I guess, commodity flash. But, you know, the lead developers and engineers that I talked to came from, you know, Facebook and Oracle and others, because understanding the database and that underlying substrate is what it took to create the hyperconverged infrastructure that people know is there. So maybe we could start: just give the audience a little bit on, you know, you've been with Nutanix a long time, your background, and what it is that you and your team work on inside the company. >> Yeah, thank you. So, uh, I think I come from distributed systems, and have been in them for a long time.
I worked in Oracle for seven years, building parts of the Exadata system, some of the convergence that databases have done between compute and storage. You could see the same hyperconvergence in other platforms, like Hadoop, where compute and storage were brought together. I think the Nutanix story was all about: can we get this hyperconvergence to work for all types of applications? And that was the vision of the company: whatever platform these hyperscalers and these big database companies had built, can this be provided for everybody, for all types of applications? I think that was the main goal. And I think we're inching our way there, slowly but surely; I think we will get there, and pretty much every application will run on Nutanix. >> All right, well, if you look at kind of the underlying code that enables your capability, one of the challenges always out there is, you know, I build a code base with the technology and the skill sets I have, but things change. I was talking about flash adoption before, and a lot of changes have happened in the storage world. Compute has gone through a lot of architectural changes, as have software and location, with clouds and the like. So let's just talk about that code base. You talk about building distributed systems; how does Nutanix make sure that the window doesn't close on how long that underlying code is going to be able to take advantage of new features and functionality? >> Yeah, I think at Nutanix, from the beginning, one thing that we have made sure is that we could always deliver continuous innovation through the choices that we made, like actually separating, you know, the concerns between storage and compute. We always had a controller VM running the storage, and we actually made sure we could run all of the storage in user space.
And over time, what has happened is, every time we upgraded our software, people got, you know, faster performance, more security, more scalability. And that, I think, is the secret sauce. It's all software; it's all software-defined infrastructure on commodity hardware, and the commodity hardware can be anywhere. I mean, you could pretty much build it on any brand. And now we see, you know, the hyperscalers coming on with bare metal as a service. We see hyperconvergence as the platform, the infrastructure on which enterprises are willing to run their applications in the public cloud. I mean, look at Nutanix Clusters; it is getting a lot of traction. We have just come out, and even before that there was a lot of customer excitement there. And that, I think, is the true nature of Nutanix: being a pure software play and treating every hardware, you know, uniformly. Whether it is available in the public cloud or it's available in your own data center, the stack, the storage or the hypervisor or the entire infrastructure software that we have, doesn't change. So I think in some ways we're talking about a new HCI: I call the hybrid cloud infrastructure the new HCI. The hyperconverged infrastructure becomes the substrate for the new hybrid cloud infrastructure. >> Yeah, definitely. There was a misconception for a number of years: people looked at the Nutanix solution and they thought appliance. So what if I've got a new generation of hardware, or I need to choose a different hardware vendor? Nutanix is a software company, as you described. You've got some news announced here at the .NEXT show when it comes to some of those underlying storage pieces; bring us through it. You know, we always go around to the events, and, you know, companies like Intel and NVIDIA are always standing up with the next generation. I teased it up a little bit when we talked about flash. What's happening with NVMe and storage-class memory?
>> So what is it that's new for the Nutanix platform? >> Yeah, let me start a little bit, you know, on what we have done over the last year or so before, you know, getting into the details of why we did it and, you know, what the advantages are that customers might tap. So one thing that was happening, particularly over the last decade or so, is flash was moving on to faster and faster devices. I mean, 3D XPoint came in, storage-class memory was coming in, so one thing that was very apparent was, you know, this is something that we need to get ready for now. At this point, what has happened is that the price point at which these high-end devices can be obtained has come to where mass consumption can happen. I mean, anybody can actually get a bunch of these Optane drives at a pretty good price point, put them in their servers, and expect the performance. I think the important thing is we built some of the architectural pieces that enable us to leverage the performance that these devices give. And for that, let's start at the beginning. One of the things that we did was make sure that we have things like fine-grained metadata so that, you know, you could get things like data locality. So the data that the compute would need would stay in the server; that was a very important part, one of the key tenets, of our platform. And now, as these devices come on, we want to actually access them without going over the network. You know, in just the last year, we released a construct called the Autonomous Extent Store, which not only makes data local but metadata as well, giving us the ability to actually have hyperconvergence where we can get data and metadata from the same server. It benefits all of these newer-class storage devices, because the faster the device, the closer you want it to the compute, since the cost of getting to the device actually adds to the latency.
It adds up for the application in the storage latency it sees. I would say at this .NEXT, what we're announcing is two technologies. One is Blockstore, which is our own user-space file system; it's a completely user-space file system. We're replacing what we had before for all our, you know, disk drives, which will then be NVMe and beyond. And we're also announcing SPDK support, which is basically a way of accessing these devices from user space. So now, with both of these combined, what we can do is actually make an I/O from start to finish all in user space, without crossing the kernel and without doing a bunch of memory copies. And that gives us the performance that we need to really get the value out of these, you know, high-end devices, and the performance is what our high-end applications are looking for. And that is, I think, the true value that we can add for our customers. >> Yes. Oh, man, if I understand that right, it's really a deconstruction, if you will, of how storage interacts with the application. It used to be the SCSI stack when I thought about the interface and how far I/O had to go. And you mentioned that performance and latency are so important here. So as I move from, you know, what traditionally was disk, either external or internal, up to flash, and up to things like NVMe, I really need to re-architect things internally, and therefore this is how you're solving it, enabling higher I/O. Maybe you could bring us inside: you know, for high-performance I/O and low latency, SAP HANA was one of the early use cases that everyone talked about, that we had to re-architect for. What does this mean for those solutions? Any other kind of key applications that this is especially useful for? >> Yeah, I think all the high-end, demanding applications. Talk about SAP HANA, or the healthcare applications; look at Epic and MEDITECH.
Look at the high-end databases: we already run a bunch of databases, but the highest-end databases still are not running on HCI. I think this technology will enable, you know, the most demanding Oracle or SQL Server databases, and of course, you know, all the analytics applications; they will now be running on HCI. So the dream that we had, that every application, whatever it is, can run on the HCI platform, can become a reality. And that is what we're really looking forward to. So our customers don't have to go to three-tier for anything. If there is an application that you want to run, HCI is the best platform for it; that is what we want. >> All right, so let me make sure I understand this, because while this is a software update, it is leveraging underlying new hardware components that are there. I'm not taking a three-year-old server to do this. Can you help us understand, you know, what do they need to buy to be able to enable this type of solution? >> So I think the best thing is, we already came out with the all-NVMe platform, and everything beyond that is a software change. Everything that we offer is just available on an upgrade. So of course you need a base platform which actually has the high-end devices themselves, which we have had for a year or so. But the good thing about Nutanix is, it's like the Tesla you know you have: once you get that software upgrade, you get that boosted performance. So you don't need to go and buy new hardware again. As long as you have the required devices, you get the performance just by upgrading to the new version of the AOS software. I think that is one of the things that we have done forever. I mean, every time we have upgraded, you will see, over the years, our performance has increased, and very seldom has a customer been required to change, you know, their internal hardware to get the performance. Now, another thing that we have is we support heterogeneous clusters.
So on your existing cluster, let's say that you're running on all-flash and you want to go all-NVMe. Maybe you can add nodes, you know, which are all-NVMe, and get the performance on those nodes, while the flash nodes can take the non-critical pieces, which don't require outstanding performance but still give you the density, for VDI or maybe general server virtualization, while the new nodes take on the highest-end databases or highest-end analytics applications. So you can take the same cluster and slowly expand it to actually take on this class of applications. >> Yeah, this is such an important point we had identified very early on: when you move to HCI, hopefully that should be the last time that you need to do a migration. Anybody that has dealt with storage knows moving from one generation to the next, or even moving frames, can be so challenging. Once you're in that pool, you can upgrade code, you can add new nodes, you can balance things out. So it's such an important point there. You stated earlier that the underlying AOS is now built very much for that hybrid cloud world. You talk about things like Nutanix Clusters; you now have the announcement with AWS, now that they have their bare metal server service. So do we feel we're getting a balancing out of what's available for customers, whether it's in their own data center, in a hosted environment, or in the public cloud, to take capabilities like you were talking about with the new storage classes? >> Yeah, I think most of these public clouds are already providing you, uh, hardware which has NVMe built in, and which, I'm sure, in the future will have storage-class memory built in. So all the enterprise applications that were running on-prem with the latency guarantees, you know, with the performance and throughput guarantees, can be available in the public cloud, too. And I think that is a very critical thing.
Because today, when you lift and shift, one of the biggest problems that all the customers face is that when you're in the cloud, you find that enterprise applications are not built for it, so they have to either re-architect them or they have to rebuild them, you know, using new cloud-native constructs. And in this model, you can use the bare metal service and run the enterprise applications in exactly the same way as you would run them in your private data center. And that is a key point, because now, with our data mobility framework, where we can actually take both storage and applications, you know, and move them across the public and the private cloud, we have the ability to actually control an application end to end. A customer can choose where they want to run it; they don't have to think, oh, I have to move to that, it has to be re-architected. You can choose the cloud and run it on the bare metal service exactly as you were running it in your private data center, utilizing things like Nutanix Clusters. >> Great. Well, Mono, the last question I have for you: we really dug down into some of the architectural underpinnings, some of the pieces inside the box. Bring it back up high level, if you would. From a customer standpoint, what are the key things that they should understand that Nutanix is giving them with all of these new capabilities you mentioned, the Blockstore and the SPDK? >> Yeah, I think for the customer, the biggest advantage is that the platform that they chose for, you know, general server virtualization can be used for the most demanding workloads. They're free to use, you know, Nutanix for SAP HANA, for high-end Oracle databases, for big data and analytics; they can actually use it for all the healthcare apps that I mentioned, Epic and MEDITECH, and at the same time keep the investment in hardware that they have already made. So I think the Tesla analogy that we always use is so apt with Nutanix.
I think with the same hardware, uh, investment that they have done, with this new architecture they can actually start leveraging that and utilize it for more and more, you know, demanding workloads. I think that is the key advantage. Without changing, you know, the appliances or your SAN or your servers, you get the benefit of running the most demanding applications. >> Well, congratulations to you and the team. Thanks so much for sharing all the updates here. All right, and stay tuned for more coverage from the Nutanix Global .NEXT Digital Experience. I'm Stu Miniman. And as always, thank you for watching theCUBE. >> Yeah, yeah, yeah, yeah, yeah
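The mixed-node expansion described in the interview above, all-NVMe nodes for the most demanding databases and analytics, flash nodes for density-driven workloads like VDI, can be sketched as a toy placement rule. The threshold, workload names, and IOPS numbers below are invented for illustration; this is not Nutanix's actual scheduler:

```python
# Toy sketch of placing workloads in a mixed cluster: performance-critical
# workloads go to all-NVMe nodes, density-driven ones stay on flash nodes.
# The 50k IOPS threshold and the workload profiles are made up for illustration.
def place_workload(iops_demand: int, nvme_threshold: int = 50_000) -> str:
    """Route a workload by its IOPS demand: 'nvme' for hot databases and
    analytics, 'flash' for non-critical, density-driven workloads."""
    return "nvme" if iops_demand >= nvme_threshold else "flash"

workloads = {
    "oracle_db": 120_000,  # highest-end database
    "analytics": 80_000,   # analytics-heavy application
    "vdi_pool": 8_000,     # density matters more than latency
    "web_vm": 2_000,       # general server virtualization
}
placement = {name: place_workload(iops) for name, iops in workloads.items()}
print(placement)
```

The point of the sketch is only that one cluster can host both tiers at once, so expansion means adding the right node type rather than migrating to a new frame.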

Published Date : Sep 9 2020

SUMMARY :

It's the queue And bringing that to the enterprise was actually you know, one of the interesting components there dial I think the Nutanix story was all about Can we get this hyper convergence one of the challenges always out there is, you know, I build a code base with the technology and One thing that we have made sure is that you know, you know, companies like Intel and NVIDIA always standing up with next generation. At this point, what has happened is that the price point that you know, these high end devices So I was removing from, you know, what traditionally was disc either externally I. I think this technology will enable you know the most demanding oracle or Sequels. Can you help understand? I mean, every time we have upgraded, you will see. You talk about things like clusters that you have now have the announcement with AWS that were running on prim with the latency guarantees, you know, Bring it back up high level, if you would, from a customer standpoint, key things that they should be understanding They're free to use, you know, Well, congratulations to you and the team.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Chris | PERSON | 0.99+
NVIDIA | ORGANIZATION | 0.99+
Nutanix | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
Dheeraj | PERSON | 0.99+
Tesla | ORGANIZATION | 0.99+
three year | QUANTITY | 0.99+
Nanosys Bhattacharya | PERSON | 0.99+
seven years | QUANTITY | 0.99+
Facebook | ORGANIZATION | 0.99+
Intel | ORGANIZATION | 0.99+
three year | QUANTITY | 0.99+
both | QUANTITY | 0.99+
Oracle | ORGANIZATION | 0.99+
Manosiz Bhattacharyya | PERSON | 0.99+
two technologies | QUANTITY | 0.99+
One | QUANTITY | 0.98+
last year | DATE | 0.98+
one | QUANTITY | 0.98+
today | DATE | 0.98+
over 10 years | QUANTITY | 0.98+
First time | QUANTITY | 0.98+
2020 | DATE | 0.96+
first | QUANTITY | 0.96+
one thing | QUANTITY | 0.96+
one generation | QUANTITY | 0.96+
a year | QUANTITY | 0.95+
This year | DATE | 0.93+
last decade | DATE | 0.93+
Cube | COMMERCIAL_ITEM | 0.9+
100 | QUANTITY | 0.9+
C | TITLE | 0.9+
eight | QUANTITY | 0.89+
stew Minuteman | PERSON | 0.85+
One thing | QUANTITY | 0.85+
K | OTHER | 0.82+
Global | EVENT | 0.75+
year | DATE | 0.7+
Mono | PERSON | 0.66+
dot | EVENT | 0.66+
Layton | ORGANIZATION | 0.65+
vmc Nutanix | ORGANIZATION | 0.63+
88 | OTHER | 0.6+
last | DATE | 0.6+
things | QUANTITY | 0.59+
Nutanix | TITLE | 0.57+
smp | TITLE | 0.56+
suppliers | QUANTITY | 0.53+
Construct | ORGANIZATION | 0.52+
Autonomous Extent | TITLE | 0.52+
.NEXT Digital Experience | EVENT | 0.51+
Iot | TITLE | 0.51+
Hana | ORGANIZATION | 0.48+
Cube | ORGANIZATION | 0.47+
Hana | TITLE | 0.46+
kernel | TITLE | 0.44+
SPK | TITLE | 0.39+

Guatam Chatterjee, Tech Mahindra & Satyendra Gupta, Gov. of India | AWS Public Sector Partner Awards


 

>> Announcer: From around the globe, it's theCUBE. With digital coverage of AWS Public Sector Partner Awards. Brought to you by Amazon Web Services. >> Hi, I'm Stu Miniman, and welcome back to theCUBE's coverage of the AWS Public Sector Partner Awards. We're going to be digging in. This award is for the most customer obsessed migration, and I'm happy to welcome to the program two first-time guests coming to us from India. First of all, from the partner, Tech Mahindra, we have Gautam Chatterjee. He is the vice president with Tech Mahindra, who's the winner of the award, and they've brought along their customer for this, that is Satyendra Gupta, who is the director of the CPWD, which is the Central Public Works Department, part of the government of India. Gentlemen, thank you so much for joining us. >> Thank you. >> All right, if we could, let's start with just a quick summary of what your organizations do. Gautam, we'll start with you. Tech Mahindra, I think most of our audience, you know, should be aware: large, very well known organization. Congratulations to you and the team on the win. Tell us what your part of Tech Mahindra does. >> Okay. So, Tech Mahindra is a five billion dollar organization, and it's a part of Mahindra and Mahindra, which is at approximately a $22 billion valuation worldwide. So, Tech Mahindra is primarily into IT services and consulting for information technology and information technology related works across the globe. We have got multiple offices, almost around 90 locations across the country, and we have got operations worldwide in different verticals and different geographies. So, as a part of Tech Mahindra, I manage the central government, that is, the public sector business for Tech Mahindra, based out of New Delhi, in India. And we handle the complete large public sector organizations and different ministries which come under the government of India. >> Wonderful!
Satyendra, obviously public works is relatively self explanatory, but tell us a little bit about your organization, your role, and, if you could, introduce the project that your group worked with Tech Mahindra on. >> Okay, so, Central Public Works Department is a 165 year old organization; it started working in 1854. The primary responsibility of this organization is to build the construction works of the government of India, primarily in the buildings sector. Predominantly, the department acts as the technical arm of the government of India regarding these construction concepts and matters. This department is spread across the country, from the north down to Kerala in the south, and from Gujarat in the west across to the east; it has offices across the country. And it is not only buildings: we have created projects for the government of India like the stadiums, and, for example, when the tsunami came, the projects we picked up were small houses that we had to build and hand over. CPWD has been using information technology since long, but all along in silos. Now, last year, it was decided that we would implement an ERP system in the CPWD in place of the various standalone softwares, and we will be implementing a single unified platform, and everything will be connected to each other, too. So, this is what the ERP implementation is. As far as myself is concerned, I am in charge of the implementation of this ERP system in the department, from its inception to the end: detailing the whole of the process, the onboarding of Tech Mahindra, and the implementation.
And then, thereafter, working in the department to make it adoptable, we train everybody. These are the roles that I have. >> All right, Gautam, if you could: migration's obviously a big part of what I expect Tech Mahindra is helping customers with. Help frame up, you know, the services that you're doing. Talk a little bit, if you could, about the underlying AWS component of it, and, you know, specifically, give us a little bit about Tech Mahindra's role in the public works project that we were just talking about. >> Okay. So, coming to the relationship and the journey which we have started for the CPWD project: it's around a year, year and a half back when we started interacting with CPWD, understanding their business challenges and the business requirement, which is primarily automating the whole processes. And there are multiple applications, multiple processes which they wanted to automate. Now, definitely, once the automation comes into the picture, it has to cover the complete automation of the applications, the complete automation of the infrastructure, and the complete automation of the UI part of it, that is, the user perceptions, the user interface, right? So, all three have been covered by this complete automation process. As a part of the system integration business, our main objective is to plan and bring in the respective OEMs, who are the best-of-breed technology providers, to utilize those platforms and those core applications, so that, by utilizing those technologies and applications, we can automate the complete process and provide the complete drill-down management view to CPWD for their entire operations and applications. In the process of doing that, what we have done: we have brought in SAP S/4HANA, which is the primary business application which will be implemented in CPWD.
The end-user log-in and user interface will be done through a portal, and that portal will be utilizing the Liferay portal, which will be the front-end user interface. There will be an eTendering application, which will also come through one of our large alliance partners, who will be working together with us for the eTendering application, which is also a part of the whole automation process. And the ERP application, eTendering, the portal, and all the applications, as a matter of fact, will be hosted on the cloud, on the AWS platform. Now, once you're talking about the AWS platform, that means it will provide the complete infrastructure as a service, and the complete platform as a service. So, all the compute and storage, everything, will be deployed from the AWS cloud, and necessarily all the platform in terms of your database applications, all third-party tools to do the performance testing, management, monitoring: everything will be provided as a platform service by AWS. So, we engaged AWS from the beginning itself. The AWS team and the SAP team, both major OEMs, worked with us very hand in glove from day one. And we had multiple interactions with the customer. We understood the challenges. We understood the number of users, number of iterations, number of redundancies, I mean, the kind of high availability they will require, given the business criticality of the applications, based on which, together, along with AWS, Tech Mahindra, and SAP, all three of us together arrived at the complete solution, architecture, and the optimization of the whole solution, so that the overall impact comes to CPWD as the customer: the ultimate results, and the business output they desire. You know? So, that is where we actually interacted.
We have got the interactions with the AWS solutions team and AWS architect team, along with our in-house architect and solutions team, who worked very closely with the customer on the sizing, so that it exactly matches the requirement, not only for today but down the line for the next four years, because the complete implementation cycle is 18 months, and after that, Tech Mahindra, as the prime service provider, will provide four years of post-implementation support to CPWD. Because we all understand that for any government department, this kind of business application implementation is a transformation. Now, this transformation definitely cannot happen overnight. It has to happen through a process, through a cycle, and through phases, because there will be the proactive users who will start using the ERP applications from the beginning, and, gradually, with more and more success, more and more user-friendliness will come into the whole picture. Then participation from multiple users, multiple stakeholders, will come on board. The moment that comes in, the user load and the users' participation, both on the platforms and on the infrastructure, will keep on changing, keep on increasing, and that is why our role will be how to manage the complete infrastructure, how to manage the complete platform, throughout the journey of this transformation of five and a half years. And that is the exact role, as the prime and large MSP, that Tech Mahindra will perform for the next five and a half years, along with AWS, along with CPWD, and along with SAP. (coughs) >> All right, well, Satyendra, Gautam just laid out, I think, a lot of the reasons why they won the customer obsessed award from AWS on this.
You know, I think back to earlier in my career, and you talk about an SAP rollout: it's not only the length of time that it takes for the rollout, and the finance involved, but what Gautam was talking about is the organizational impact and adoption. So, I would love to hear from your side: what were the goals that you had coming into this? It sounds like getting greater adoption inside the organization for using these services. Give us your insight as to, you know, how that roll-out has been going, the goals you had, how you're meeting them, any success metrics that you use internally to talk about how the project has gone so far. >> To implement the ERP system in the CPWD, the activities have been going on since a long time; more than one and a half years have passed. We had certain ideas concerning the way we transform our business processes, and the ERP implementation is the last one; most of them have been implemented. We have had interactions with all the leading IT service providers in the country, along with all the leading cloud service providers in the country, and, of course, all the leading ERP OEMs, and so on. It has been a long journey. Through an open tendering process, Tech Mahindra has been appointed as the system integrator, and they have come with all the sorts of services that they are offering: for example, SAP for the ERP, the Liferay system for the portal, and eTendering. And, overall, everything has been hosted on the AWS cloud platform. So, that is where we are.
And, everybody knows that Amazon is the leading cloud service provider, with the largest of the facilities available. So, during this journey, we have got lots of support from AWS; the AWS team continuously visited our office and explained each of our queries, and, from March onwards, Tech Mahindra has started the implementation process we are in. More than four months have passed since then, and we have covered a lot. The whole objective of this implementation is that all our activities will be done on this ERP system only: if somebody is working in the CPWD, they will work in the CPWD on the ERP, or they will not be able to work at all. That is the goal of the whole system. And our whole system is going to be automated. Earlier, we were working in silos: the applications were not talking to each other, and time was invested making entries for the different activities at different times in the different applications, when everything we wanted was to be integrated with each other. But that will go away. So, what we are expecting: everything will be on the ERP system, and we are expecting the efficiency of the CPWD units is going to be increased tremendously. Apart from this, they will handle a larger number of works compared to what they were handling, and in the same time. And everything will revolve around the click of a button; we need not go and ask anybody to give the reports, et cetera. So, project monitoring will improve, too. By the click of a button, we will also be able to get all the inputs, all the reports, on what is going on across the country. So, it is going to be really a transformation in the working of the department, and, on the whole, the entire public works sector of this country is going to be benefited out of this.
This has been like a lighthouse today. This ERP implemented in the CPWD is the lighthouse ahead: there are more than 30 public works departments, the state public works departments, working across the country, so this is going to open a window for everybody there. Once this implementation is a success, it will have far-reaching implications for the implementation of the ERP system, or similar systems, in the public works of the whole country. So, there is a lot at stake out there. And, hopefully, with the help of Tech Mahindra, with the help of SAP, AWS, and Amazon, one day we will be able to implement it successfully, and we are going to get the benefit out of it. Everybody is going to benefit, not only the Central Public Works Department, but all of our stakeholders: all the stakeholders in terms of businesses, in terms of their reach to the Public Works. And there is a new door to open, because IT had not been leveraged this way in the Public Works Department, whether in the central department or the state governments; the other IT systems hadn't used ERP. It is a lighthouse headed to success; it will have far-reaching implications for everybody. >> Well, I tell you, Satyendra, that's been the promise of cloud: that we should be able to do something once, and the scalability and repeatability is something that we should be able to get. Gautam, I want to give you the final word on this. You know, speak to how does cloud, how do we enable this to be able to scale throughout many groups within the organization without it being, you know, as much work. I think about traditional IT: it's, well, okay, I spend a project, I spend so much time on it, and then every time I need to repeat it, I kind of, you know, have that same amount of work. The, you know, labor should go down as we scale out in a cloud environment. Is that what you feel is the case? You know, help us understand how this lighthouse account will expand. >> Okay.
So, any cloud initiative nowadays, in any organization, primarily benefits in both ways. Number one, the organization doesn't require to invest up front on the capital expenditure part of it. That's very important. Number two, the organization has got the flexibility to scale up and scale down based on the customer requirements, within a click of the mouse. It doesn't take any time, because the entire provisioning of the infrastructure is available with the cloud infrastructure service provider. And, similarly, the scaling of the platforms, that's also available with the cloud infrastructure provider. So, once you do the complete requirement mapping and the sizing for the entire tenure of the project, then the provisioning and deprovisioning is not a matter of time; it can happen with a click of the mouse. That's number one. Number two, it has become a challenging activity for any government organization to have their own IT set-up, to manage such a huge, mammoth task of the entire infrastructure, applications, services, troubleshooting, 24 by 7, everything. So, that's not expected from the large government organizations, as such, because that's not their business. Their business is to run the country, to run the organization, to grow the country's different ideas. And the IT services organizations, like Tech Mahindra, are there to support those kinds of automation processes. And the platforms which are available on the cloud nowadays, the ease of application management, monitoring, availability of the entire infrastructure, that makes use of the whole, complete system. So, it all works together. It's not that the system integration organization alone will do all of the reform. It has to happen in synergy.
So, applications have to work together, infrastructure has to be available together, the management and monitoring have to happen, scaling up and scaling down have to happen, all kinds of updates, upgrades, and patches down the line, and continuity of the whole contract has to happen, so that the system, once up and running, keeps performing at least for the period of the next five years, the tenure of the contract, across multiple departments. Now, what Mr. Gupta was saying is very, very true: CPWD is the kind of mother organization for all public works departments in the country. And all the public works departments in the country are eagerly looking at this project. Now, it is very important for all of us, not only for Tech Mahindra, but for Tech Mahindra, SAP, Liferay, and AWS together, to work and make this project a success, because it is not simply that, for a single customer, this project has to be successful. It's a flagship project for the government of India, and it's being monitored directly by government of India officials and top-ranking bodies on a day-in and day-out basis. That's number one. Number two, if we become successful together in this project, there will be an avenue, as Satyendra Gupta has said: all state PWDs will be open to it. They will try and adopt, and they will try and implement, a similar kind of system in all the respective states in the country. So it's a huge opportunity in terms of technology enhancement, automation, infrastructure, applications, and, moreover, as a service provider, to provide the services to all these bodies together, which, I feel, is a huge, huge opportunity for all of us, and we are confident that we will work together, hand in glove, the way we have done from day one of this initiative, and we'll take it forward. >> All right, well, Satyendra, thank you so much for sharing the details of your project; wish you the best of luck with that going forward.
And, Gautam, congratulations again to Tech Mahindra for winning the most customer obsessed migration solution. Thank you both for joining. >> Both: Thank you. >> Thank you very much. >> Thank you very much. >> All right, and thank you for joining. I'm Stu Miniman, this is theCUBE's coverage of AWS Public Sector Partner Awards. Thanks for watching. >> Gautam: Thank you very much. (bright upbeat music)
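The scale-up/scale-down flexibility described in the interview above can be illustrated with a minimal, provider-agnostic sketch of how an autoscaling request is kept within a group's configured bounds. The function name and numbers are illustrative only, not any specific cloud provider's API:

```python
def clamp_capacity(requested: int, min_size: int, max_size: int) -> int:
    """Clamp a requested node count to the scaling group's configured bounds,
    the way cloud autoscaling keeps scale-up/scale-down requests within limits."""
    return max(min_size, min(requested, max_size))

# Scale up for a usage spike, scale back down afterwards: opex, not capex.
print(clamp_capacity(12, min_size=2, max_size=10))  # 10: capped at the group max
print(clamp_capacity(1, min_size=2, max_size=10))   # 2: never below the floor
print(clamp_capacity(6, min_size=2, max_size=10))   # 6: within bounds, as asked
```

The design point is the one Gautam makes: capacity changes become a parameter change against pre-provisioned infrastructure, not a procurement cycle.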

Published Date : Aug 6 2020

SUMMARY :

the globe, it's theCUBE. First of all, from the and the team on the win. is the public sector and, if you could, introduce the project in the department to make it role in the public works project and 40 of the whole automation process. and it's not only the and the time that will be and the scalability and the management, monitoring has to happen, again to Tech Mahindra of AWS Public Sector Partner Awards. Gautam: Thank you very much.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Satyendra | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Satyendra Gupta | PERSON | 0.99+
Gautam | PERSON | 0.99+
Tech Mahindra | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Tech Mahindra | ORGANIZATION | 0.99+
Gautam Chatterjee | PERSON | 0.99+
India | LOCATION | 0.99+
Mahindra | ORGANIZATION | 0.99+
Gupta | PERSON | 0.99+
Amazon Web Services | ORGANIZATION | 0.99+
last year | DATE | 0.99+
CPWD | ORGANIZATION | 0.99+
Stu Miniman | PERSON | 0.99+
Liferay | ORGANIZATION | 0.99+
New Delhi | LOCATION | 0.99+
Gujarat | LOCATION | 0.99+
Central Public Works Department | ORGANIZATION | 0.99+
Didac Lee | PERSON | 0.99+
1854 | DATE | 0.99+
18 months | QUANTITY | 0.99+
Guatam Chatterjee | PERSON | 0.99+
SAP | ORGANIZATION | 0.99+
40 | QUANTITY | 0.99+
both | QUANTITY | 0.99+
more than 30 public work departments | QUANTITY | 0.99+
More than four months | QUANTITY | 0.99+
Both | QUANTITY | 0.99+
five billion dollar | QUANTITY | 0.99+
Central Public Works Department | ORGANIZATION | 0.99+
more than one and a half years | QUANTITY | 0.99+
five and a half years | QUANTITY | 0.98+
165 year old | QUANTITY | 0.98+
7 | QUANTITY | 0.98+
this year | DATE | 0.98+
HANA | TITLE | 0.98+
eTendering | TITLE | 0.98+
Kerala | LOCATION | 0.98+
four years | QUANTITY | 0.98+

Chhandomay Mandal, Dell EMC | VMworld 2018


 

(upbeat music) >> Live from Las Vegas, it's theCUBE! Covering VMworld 2018. Brought to you by VMware, and its ecosystem partners. >> Hey, welcome back to theCUBE! Our continuing coverage at VMworld 2018, I'm Lisa Martin with my co-host John Troyer. We're very excited to welcome back to theCUBE one of our alumni, Chhandomay Mandal, the director of product marketing at Dell EMC. Chhandomay, it's great to talk to you again! >> Thank you, nice to be here. >> We just seem to do this circuit in Las Vegas. >> Yeah. (laughing) >> So, loads of people here, we last got to speak four months ago at Dell Technologies World, thematically that event about making IT transformation real, about making digital transformation real, security transformation real. Let's talk about IT transformation. Yesterday, Pat Gelsinger talked about you know, the essentialness that customers have to transform IT, it's an enabler of digital transformation, let's talk about what Dell EMC is continuing to help customers do, to transform their IT so they can really get, get on that successful journey to digital transformation. >> Yes, the Dell transformation is key into this digital economy in order to thrive in this new world, right? And, digital transformation is fueled by IT transformation. For us, IT transformation means modernizing the underlying infrastructure, so that they can deliver on scale, performance, availability, cost-effectiveness. They can also automate a lot of the manual processes, and streamline the operations, net result being freeing up the resources, and kind of like, deliver the transformation for not only application processes, but also businesses in general. So, with our portfolio, we are helping customers into this journey and since we talked at Dell Technologies World, it is going great, we are seeing a lot of adoption in this portfolio. >> Chhandomay, I love, you know, you work on high-end storage, right? Which is. >> Yes. 
>> Which means that these are business-critical applications that you are supporting. >> Absolutely. >> And, that means that they're, in some ways, some of the most interesting, right? And the deepest and most important, when you're talking digital transformation. But it comes down to, you know, as you say, efficiency and how the IT department is running. In the olden days, you'd get a VMAX, and you'd have an admin, and there's a lot of knobs and adjustments and tuning, and you have to keep that machine running smoothly because they're supporting the enterprise. Now, the new next-generation PowerMax, some of the, you know, tell us a little about that. What I'm really impressed with is all the automation, and all the efficiency that goes into that platform. >> Absolutely. Absolutely. So, PowerMax is our latest flagship high-end product. It's an end-to-end NVMe design platform, designed to deliver, like, the highest level of performance. Not just performance, but the highest level of efficiency, as well as all the trusted data services that are synonymous with VMAX. And, not to talk about the six-nines of availability: all that goodness of the previous generations carried over. But the key thing is, with PowerMax, what we have done, if I need to boil it down into three things: this is a very powerful platform, it's simple, and it's trusted. So now, when I talk about very powerful, obviously performance is part and parcel. It is actually the fastest storage array: 10 million IOPS, 150 gigabytes per second, >> It's a maniac, it's a, it's a screamer, it's amazing. >> Et cetera, et cetera, et cetera. >> Yeah, yeah, yeah. >> But, like, that's kind of like table stakes and bread and butter for us. Now, what I want to highlight is how simple the platform has become. We have a built-in machine learning engine within the platform.
And now, instead of, like, "I need this much capacity and this much performance," you can actually provision storage based on the service levels that you need to give your customers. And we, underneath, will take care of, like, whatever it means for any workloads you are running. And how are we doing it? So, for example, today, right, most of the applications are still, like, business applications: Oracle, SAP, you name it. But, with the digital transformation, a lot of the modern, analytics-heavy applications are also coming in, right? So, if I were to break it up, it would be, say, like 80/20: 80% business, 20% modern applications. Now, we are seeing the modern applications getting adopted higher and higher and-- >> It's going to flip, right? At some point. >> Yes, like in three to five years, the ratio will be the opposite. Now, if you are buying an array like PowerMax today, how can we deliver the performance you need for the business applications of today, while taking care of the analytics-heavy applications of tomorrow, at the same time meeting your SLAs all the way through? And that's where the machine learning engine comes in. It, like, takes 40 million data sets in real time. It makes six billion decisions per day, and, essentially, it figures out from the patterns in the data how to optimize where to place the load, without the administrators having to, like, tune anything. So it's, like, extremely simple. Completely automated, thanks to the AI and ML engine. >> Taking advantage of those superpowers, AI, ML, that Pat. >> Yes. >> Talked about yesterday. So, you talked about: it's efficient, it's fast, trusted. Speaking of trust, Rackspace, long-time partner of Dell EMC and VMware; we actually spoke with them yesterday. Dell EMC, and PowerMax particularly, have been really kind of foundational to enabling Rackspace to really accelerate their business in terms of IT transformation.
Talk to us about that in terms of them as a customer. >> So, nice that you bring up Rackspace; they got a shout-out from Pat yesterday as the leading multi-cloud provider in the managed space, right? Now, if you look at Rackspace, they have, like, 100,000 plus customers, all with various types of needs. Now, with a platform like PowerMax, they are able to simplify their IT environment, with a lot of consolidation happening on that dense platform, so they can reduce the footprint: less power and cooling. At the end of the day, they're minimizing their operational expenses, simplifying how they manage their infrastructure, how they monitor their infrastructure. It becomes kind of, like, invisible, or self-driving storage. Like, you really, like, don't worry about it. You worry about the business value and innovations that IT can bring for your digital transformation, while the array kind of, like, does its own work. A lot of work, no mistake about it. But everything is kind of, like, hidden from the admin perspective. Whether you are running Oracle or Splunk, it figures out, like, what to do. Not only, like, maintaining the service levels, but as the technology evolves and you bring in not just NVMe SSDs but next-generation storage class memory, it is going to automate and do the placement by itself. >> Yeah, that's huge, right? Because that's where you free up that time and those resources, and brain power, frankly, for your IT group to be able to work on more strategic projects than tuning this particular data store and LUN or whatever for Splunk, et cetera, right? You've got, again, self-driving, kind of self-driving storage, there. I also, Chhandomay, I also wanted to talk about the other kind of high-end array in Dell EMC's portfolio, the XtremIO. And that, you know, all-flash; you can talk a little about that, but, you know, what are the use cases there, and when should people be looking at that?
And what kind of, what's new in that world? >> Sure. So, PowerMax is the flagship high-end product line, evolved over 30 years, 1,000 plus patents, right? Whereas, if you contrast it, XtremIO is a purpose-built all-flash array, designed from the ground up to take advantage of the flash media. It delivers very high performance with consistently low latency. But the key innovation there is the way it does inline, all-the-time data services. Especially the data reduction: the content-aware, in-memory metadata helps deliver a new class of copy services, and then it scales modularly, scale-up and scale-out. So, the use cases where XtremIO is very efficient are where you have a lot of common data, for example VDI; we can offer very high data reduction ratios, reducing your footprint for a VDI-type environment. The other use case is integrated copy data management. So, for example, for every database there are probably eight to 10 copies at a minimum. Now with XtremIO, you can actually use those copies the same as the production platform, and run workloads on them, whether it's your dev workload, or reporting, test/dev, sandboxing. All of those things can be run on the same platform, and the array will be able to deliver without breaking a sweat. >> And as I said, you're doing copy data management sort of thing? >> Yes. >> Yeah, okay that's great. >> Yes, yes, yes. >> Yeah, that's. >> So, customer examples, you know how much I love that. You talked about this really strong example with PowerMax and Rackspace. Give us a great example of a customer using XtremIO X2 that's really enabled with these superpowers to grow their businesses. >> Sure, so what better example can there be than when the customer, in this case, is, guess what? >> VMware.
(laughing) >> So, VMware's IT cloud infrastructure team is using XtremIO X2 for their virtualized SAP HANA environment. And there are several other workloads in the pipeline. But what I want to highlight is what and how they are doing it. So they have their production environment, they are leveraging replication technologies, and then from that replica they are making copies; on those copies they are applying the patches, sandboxing, all those things, on an exact replica of the production environment. And then, when they are done, they are rolling it back out to production. The entire workflow is automated, tested, and a great example of how they are doing it. But it's not just the copy data management; there are other aspects to it. So for example, the performance. They started with a two terabyte VM and they tried to clone it both on the traditional storage and on XtremIO. With the traditional storage, it took 2 1/2 hours. With XtremIO, it was done in 90 seconds. >> So from two hours to 90 seconds. >> Seconds. >> Is dramatic. >> And they ran the data reduction analysis as well. So, for VMware's entire ESX production environment, this is 1.2 petabytes of storage. Now, with XtremIO data reduction technology, they can see that it will be reduced to about 240 terabytes' worth of storage. So, essentially, from three rows of storage, it would be reduced to three racks of XtremIO. So you can see the savings all over the place: footprint, power, cooling, management, all of those things. So, that would be my best example of how XtremIO X2 is being used in a transformative way in the IT environment. >> Well, it kind of goes along with one of the things that Pat Gelsinger talked about yesterday from VMware's perspective: I think the stat was, they've been able to reduce CO2 emissions by 540 million tons. Sounds like XtremIO might be, or wants to be, invisible.
>> Yeah, of course. >> Facilitators. >> Yeah, yeah. We are contributing a lot in that. And I mean, at the end of the day, this is what digital transformation is about, right? So, absolutely, yes. >> That's great, Chhandomay. I mean, I would love to have a problem that required running hot on XtremIO, because I think those are super interesting problems. And the fact that you can actually turn those huge data sets into something that's actually manageable... I can envision three racks; I can't really envision half a data center's worth of spinning disks, so that's amazing. I love the engineering that goes into these high-end systems from your team there. >> Yeah, so the one other thing I wanted to mention was the Future-Proof Loyalty Program. >> Yeah, we've heard a little bit about that, tell us. >> Yes, so, this essentially gives our customers three things: one is peace of mind. You know what you are getting; there are no surprises. The second thing is investment protection. And then the third would be (mumbles). So, there are several components to it. And it is not only for XtremIO or PowerMax; it's pretty much for the portfolio. There is a list of what is part of it, and it's continually growing. Now for XtremIO and PowerMax, the important pieces are things like the three-year warranty, and then tiered pricing: they know exactly what they are going to pay for support today, as well as when a maintenance renewal comes up. Then, (mumbles) migrations. So, hardware exchange, right? Like with XtremIO to the next generation, or PowerMax to PowerMax-dot-next, they are covered with non-disruptive migration plans, and storage efficiencies. And the last two things that we added recently are cloud enablement.
And cloud consumption models. As Michael says, cloud is not a place, it's an operating model. So even with XtremIO and PowerMax, customers can pay for what they're using; it's called Flex on Demand. When they use the buffer capacity, they pay for that. And then with CloudIQ, we can monitor the storage arrays from the cloud. It's the storage analytics, so it's cloud-enabled as well. So it covers pretty much all of the things Pat talked about yesterday. >> Fantastic, well, I'm going to go out on a limb. Yesterday I asked a number of folks, including Scott Delandy, how they would describe the superpower of certain technologies. And what I'm getting from this is trust. Like, the Trustinator, so, maybe that? Can you make a sticker by the time we get to Dell Technologies World next year? >> Oh yeah, absolutely, yeah. >> Chhandomay, awesome. Great to have you back on theCUBE. >> Thank you. >> Thank you so much for sharing all the excitement, what's going on. We'll talk to you next time. We want to thank you for watching theCUBE. For John Troyer, my co-host, I'm Lisa Martin. We are live at VMworld, day two, from the Mandalay Bay, Las Vegas. Stick around, John and I will be right back with our next guest. (upbeat music)
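The headline numbers in this segment are easy to sanity-check. The sketch below uses only figures quoted in the interview (1.2 petabytes reduced to 240 terabytes; a 2 1/2-hour clone done in 90 seconds); the implied ratios are simple arithmetic, not vendor benchmarks:

```python
# Sanity-check the figures quoted in the interview; pure arithmetic.
raw_tb = 1.2 * 1000          # 1.2 petabytes of ESX production storage, in TB
reduced_tb = 240             # terabytes after XtremIO data reduction
reduction_ratio = raw_tb / reduced_tb
print(f"data reduction: {reduction_ratio:.0f}:1")  # -> data reduction: 5:1

clone_before_s = 2.5 * 3600  # 2 1/2 hours on the traditional storage, in seconds
clone_after_s = 90           # 90 seconds on XtremIO
print(f"clone speedup: {clone_before_s / clone_after_s:.0f}x")  # -> clone speedup: 100x
```

So the quoted numbers imply a 5:1 reduction ratio and roughly a 100x clone speedup, consistent with the "three rows to three racks" claim.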

Published Date : Aug 28 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Lisa Martin | PERSON | 0.99+
Pat Gelsinger | PERSON | 0.99+
Michael | PERSON | 0.99+
John Troyer | PERSON | 0.99+
Chhandomay Mandal | PERSON | 0.99+
40 million | QUANTITY | 0.99+
Chhandomay | PERSON | 0.99+
John | PERSON | 0.99+
80 | QUANTITY | 0.99+
three | QUANTITY | 0.99+
Las Vegas | LOCATION | 0.99+
90 seconds | QUANTITY | 0.99+
two hours | QUANTITY | 0.99+
Scott Delandy | PERSON | 0.99+
800% | QUANTITY | 0.99+
20% | QUANTITY | 0.99+
yesterday | DATE | 0.99+
VMware | ORGANIZATION | 0.99+
eight | QUANTITY | 0.99+
Dell EMC | ORGANIZATION | 0.99+
five years | QUANTITY | 0.99+
Rackspace | ORGANIZATION | 0.99+
20 | QUANTITY | 0.99+
1,000 plus patents | QUANTITY | 0.99+
2 1/2 hours | QUANTITY | 0.99+
1.2 petabyte | QUANTITY | 0.99+
80% | QUANTITY | 0.99+
next year | DATE | 0.99+
two terabyte | QUANTITY | 0.99+
third | QUANTITY | 0.99+
240 terabyte | QUANTITY | 0.99+
Yesterday | DATE | 0.99+
540 million tons | QUANTITY | 0.99+
VMworld 2018 | EVENT | 0.99+
Pat | PERSON | 0.99+
three things | QUANTITY | 0.99+
six-nines | QUANTITY | 0.99+
tomorrow | DATE | 0.99+
10 copies | QUANTITY | 0.99+
PowerMax | ORGANIZATION | 0.99+
today | DATE | 0.99+
over 30 years | QUANTITY | 0.99+
ESX | TITLE | 0.99+
four months ago | DATE | 0.98+
100,000 plus customers | QUANTITY | 0.98+
one | QUANTITY | 0.98+
Dell Technologies World | ORGANIZATION | 0.98+
three racks | QUANTITY | 0.98+
Oracle | ORGANIZATION | 0.98+
six billion decisions | QUANTITY | 0.97+
both | QUANTITY | 0.97+
three rows | QUANTITY | 0.96+
Dell | ORGANIZATION | 0.96+
SMP HANA | TITLE | 0.96+
XtremIO X2 | TITLE | 0.96+
day two | QUANTITY | 0.96+
Mandalay Bay Las Vegas | LOCATION | 0.95+
second thing | QUANTITY | 0.95+
10 million IOPS | QUANTITY | 0.94+
two things | QUANTITY | 0.93+
XtremIO | TITLE | 0.93+
Splunk | ORGANIZATION | 0.92+

theCUBE Video Report Exclusive | SAP Sapphire Now 2018


 

Welcome to theCUBE, I'm Lisa Martin with Keith Townsend, and we are in Orlando at SAP Sapphire Now 2018. We're very proud to be in the NetApp booth; NetApp has a very long-standing partnership with SAP. Welcome to theCUBE. Thank you, we're so glad you guys are here. Over a million people are expected to engage with the Sapphire experience, both in person and online. That's enormous. Yes, SAP is the cash register of the world; 70% of the world's transactions go through SAP, and most of us don't see it. A lot of the SAP products, like Hybris, like Ariba, like SuccessFactors, are built on NetApp. NetApp is 26 years young now and has undergone a big transformation from a traditional storage company to more cloud; we're going to be the data management company for hybrid clouds. Every customer has a different rate of motion to the cloud. That's why we have to spend an awful lot of time listening to our customers, and then talking to the C-level executives on the business side to say: what are your expectations about the technology, whether it's the reduction of labor, improved quality, or overall equipment effectiveness, and help them understand the key trends. What we're hearing from customers is: I need choice. I need to move my data around, on-prem and into whatever hyperscaler environment you want, fast and efficient, with analytics readouts. Everybody looks at their phone when we make a deposit; we expect to see that deposit instantaneously. The business needs to operate just as instantaneously, and a company like NetApp can build this data fabric to connect them seamlessly so that the customers have choice. It's an interaction of sensors, and it's two-way: taking IoT data in and then also feeding signals back. But that's part of the interface of the software; people can deploy much more effectively with a lower skill set, so there's not that hurdle. It really allows the administrators to configure a dream workspace where you can get all the data that you need to work with in one place. It takes all that noise and puts it onto one screen, so that you can simply change the data the way you would expect to on a spreadsheet.

SAP is serious about this S/4HANA move, about being able to say: we are going to create an ecosystem. If you have a developer in your enterprise and you say, I'm a big SAP user but I actually want to develop a custom app, or there are some things I might do, then SAP makes Leonardo, a machine learning foundation, available, and you can take advantage of that and develop something customized. Again, not just a products company but an ecosystem company. Sapphire in Orlando is a great example of how they're expanding the brand, in that SAP can't do everything, so we work with a lot of specialists; we couldn't do this without hardware partners. With storage, NetApp has proven to be one of those partners that can deal with a myriad of data types from a myriad of applications. That forces the stretch into voice recognition, where voice is the data, and data mining and data analytics and the like: augmented intelligence to augment humanity, this connection of humans and machines working together. They're doing all this genomic research, personalized medicine for cancer patients throughout Europe using HANA, and you wouldn't even know about it. Public safety, if you think about it, is a big thing to focus on, thinking about using drones for first responders. Smart farming throughout the Netherlands: reducing pesticide use, water usage dramatically down, and they increased yields by 10%. Helping customers change their business, change industries, save lives. Pretty cool stuff. Yeah, SAP has a little way to go yet. You talk to any HCI customer: validated and certified for HANA is a bad word today, but SAP understands it, and they're moving to certify the platform for HCI, so that was a great example of them listening to customers and continuing to transform.

Over the years, we'd love to hear from customers: hey, could I put this object and that object together and build a process? Basically, there's almost every place where the NetApp product will fit, but again, we have to make a decision: where's the place to start? Step back and look at what perhaps other competitors have done in their space, or in completely different industries. Compare it to making great content: theCUBE makes great content, and if that content can be found, people will take notice. You make a great product that impacts people's lives; it's no wonder that SAP is near the top of that brand recognition, brand value list, seventeenth on the list. So if you want to become a leader or a thought leader in your own specific industry, join the SAP HANA community, make the investments in SAP Leonardo, work with SAP, work with NetApp, and like Bill says, let's get it done. Thank you all for being here; we're ecstatic to have theCUBE in our booth. Lisa Martin with Keith Townsend on theCUBE, from the NetApp booth at SAP Sapphire Now 2018, thanks for watching. [Music]

Published Date : Jul 7 2018

**Summary and Sentiment Analysis are not shown because of an improper transcript**

ENTITIES

Entity | Category | Confidence
Lisa Martin | PERSON | 0.99+
Orlando | LOCATION | 0.99+
Keith Townsend | PERSON | 0.99+
2018 | DATE | 0.99+
Cuba | LOCATION | 0.99+
Bill | PERSON | 0.99+
Europe | LOCATION | 0.99+
Arriba | ORGANIZATION | 0.99+
10% | QUANTITY | 0.99+
Netherlands | LOCATION | 0.99+
26 years | QUANTITY | 0.99+
seventeenth | QUANTITY | 0.98+
one place | QUANTITY | 0.97+
sa PA | ORGANIZATION | 0.97+
Hybris | ORGANIZATION | 0.97+
one | QUANTITY | 0.96+
one screen | QUANTITY | 0.96+
NetApp | ORGANIZATION | 0.95+
Lisa Martin | PERSON | 0.94+
over a million people | QUANTITY | 0.94+
Hana | ORGANIZATION | 0.88+
today | DATE | 0.87+
both | QUANTITY | 0.86+
SVP | ORGANIZATION | 0.8+
SMP | ORGANIZATION | 0.79+
SP | ORGANIZATION | 0.78+
Annette | TITLE | 0.74+
first responders | QUANTITY | 0.74+
Hana | TITLE | 0.74+
70% of the world's transactions | QUANTITY | 0.7+
sapphire | ORGANIZATION | 0.7+
SAP | ORGANIZATION | 0.68+
theCUBE | ORGANIZATION | 0.67+
P | ORGANIZATION | 0.67+
SP Leonardo | ORGANIZATION | 0.66+
myriad of applications | QUANTITY | 0.6+
HDI | ORGANIZATION | 0.56+
Leonardo | ORGANIZATION | 0.54+
data | QUANTITY | 0.51+
those | QUANTITY | 0.5+
Sapphire | TITLE | 0.45+
HANA | TITLE | 0.44+

Irfan Khan, SAP | SAP SapphireNow 2016


 

>> Voiceover: It's theCUBE covering Sapphire Now. Headlines sponsored by SAP HANA Cloud, the leader in platform as a service. With support from Console Inc., the cloud internet company. Now, here are your hosts: John Furrier and Peter Burris. >> Okay, welcome back, everyone. We are here live in Orlando, Florida, for exclusive coverage of SAP Sapphire Now. This is theCUBE, SiliconANGLE's flagship program. We go out to the events and extract the signal from the noise. I'm John Furrier, with Peter Burris. I want to thank our sponsors for allowing us to get down here: SAP HANA Cloud Platform, Console Inc., Capgemini, and EMC, thanks so much for supporting us. Our next guest is Irfan Khan, who is the SVP General Manager of digital enterprise platforms, which includes HANA, end-to-end. Welcome back to theCUBE. >> Thank you. >> John: Good to see you. >> Lovely to be back here again. >> John: So, you know theCUBE history. We go way back; we've done pretty much every Hadoop World up until 2013, and now we have an event the same week as Strata in New York, BigData NYC, and we've been to every Sapphire since 2010 except for 2014 and 2015. We had a little conflict of events, but it's been great. It's been big data. I remember Bill McDermott got up there when HANA was announced, kind of, or pre-built before Hadoop hit. So, you had HANA coming out of the oven, Hadoop hits the scene, Hadoop gets all the press, HANA's now rolling, so then you roll forward four more years, and we're here. What's your take on this, because it's been an interesting shift. Hadoop, some are saying, is hard to use: total cost of ownership. Now, HANA's rising, Hadoop is sliding. That's my opinion, but what's your opinion? >> Well, that's a well, sort of, summarized history lesson there, so to speak. Well, firstly, great to be on theCUBE again. It's always lovely to see you gentlemen here, you do a wonderful job.
What I'd perhaps just highlight is maybe some of the key milestones that I've observed over the last four or five years. Ironically, 2010 was when I arrived at SAP, when the entire, sort of, if you like, trajectory of HANA started going in that direction, and Hadoop was sort of there, but it was maybe petering out a little bit because of the unknown: the uncertainty of scale, and whether or not it was going to be only batch or whether it was ever going to become real-time. So, I would maybe mark two or three milestones from the SAP side. HANA started off as a disruptive technology, which was perhaps conceived as a response to a lot of internal challenges that we were running into using the systems of record of yester-era. They were incapable of dealing with SAP applications, incapable of giving us what we now refer to as a digital core, and incapable of giving our customers truly what they needed. As a response, HANA was introduced into the market, but it wasn't limited in scope to, if you like, the historical baggage of the relational era, or even the Hadoop era, so to speak. It was a completely newly imagined technology built around in-memory computing and a columnar architecture, and therefore it gave us an opportunity to project ultimately what we could achieve with this as a foundation. So, HANA came into the market focusing on analytics to start with, going full circle into being able to do transactionality as well. And where are we today? I think Hadoop is now being recognized, I would say, probably as a de facto data operating system. So, HDFS is a very significant sort of extension to most IT organizations, but it's still lacking the compute capabilities. This is what's given rise to Spark, and of course with HANA, HANA is, in itself, a very significant computing engine. >> John: And Vora. And Vora a-- >> Irfan: Of course, and Vora, as well. Now you're finishing off my sentences. Thank you.
>> (laughs) This is what theCUBE is all about, we got a good cadence going here. Alright, so now the challenge. HANA also, by the way, was super fast when it came out, but then it didn't really fire, in my opinion, in its swim lane. It seems now it's so clear that the fruit is coming off the tree; you're seeing it blossom beautifully. You got S/4 HANA, you got the core... Explain that, because people get confused. Am I buying HANA Cloud, am I buying HANA Cloud Platform? Share how this is all segmented to the buyer, to the customer. >> Sure, I mean firstly, SAP applications need to have a system of record. HANA is a system of record. It has a database capability, but ultimately HANA is not just a database. It's an entire platform with integration and application services, and, of course, with data services. Now, as a consequence, when we talk about the HANA Cloud Platform, this is taking HANA as a core technology, as a platform, and embedding it inside of a cloud deployment environment called the HANA Cloud Platform. It gives customers who are perhaps implementing on-premise S/4, or even a public S/4 instance, an opportunity to extend those applications as they may need or require to do so for their business requirements. So, in layman's terms: you have a system of record requirement with SAP applications, and that is HANA. It is only HANA now in the case of S/4. And in order to extend the application, as customers want to customize those applications, there is one definitive extension venue, and that's called the HANA Cloud Platform. >> John: And that mainly is for developers, too. I call it the developer cloud, for lack of a better description or a more generic one. That's Cloud Foundry. Basically it's a platform as a service that is actually bolting on, I guess, a developer on-ramp, if you will. Is that a safe way to look at it?
>> Irfan: Yeah, I mean I think the developer interaction point with SAP now certainly becomes HCP, but it is also a significant ecosystem enabler as well. Only last week, or week-before-last in fact, we announced the relationship with Apple, which is a phenomenal extension of what we do with business applications, and HCP is the definitive venue for the Apple relationship, in effect. >> So, tell us a little bit about borrowing or building upon that. How should an executive, when I think about digitalization, how should they think about it? Is this something that is a new set of channels, or the ability to reach new customers, or is there something more fundamental going on here? Is it really about trying to translate more of your business into data in a way that it's accessible, so it can be put to use and put to work in more and different ways? >> Sure, it's a great question. So, what is digitalization? Well, firstly, it's not new. I mean, SAP didn't invent digitalization, but I think we know a fair bit about where digitalization is going to take many businesses in the next three to five years. So, I would say that there are five prevailing trends that are fueling the need to go digital. The first thing is hyperconnectivity. We understand that data and information is not only just consumed, it's created in a variety of places, and geographically just about anywhere now is connected. In fact, I read one statistic that 90 percent of the world's inhabitable land masses have either cellular or wireless reception. So, truly, we're hyperconnected. The second thing is the scale of the cloud, right? The cloud gives us compute not just on the desktop, but anywhere; and by definition of anywhere, we're saying if you have a smart appliance at an edge, that is, in fact, supercomputing, because it gives you an extension to be able to get to any compute device.
And then you've got cloud, and on top of that you have cyber-security, and a variety of other things like IoT. These things are all fueling the need to become digitally aware enterprises, and what's ultimately happening is that business transformation is happening because somebody without any premises, without any assets, comes along and disrupts a business. In fact, one study from Capgemini and, of course, MIT, back in 2013, revealed that by the year 2020, out of the S&P 500, approximately 40 percent of the businesses are going to cease to exist, for the simple reason that the business transformations going on, disrupting their classical business models, are going to change the way that they operate. So, to answer your question in a concatenated way: digital transformation at the executive level is about not just surviving, it's about thriving. It's about taking advantage of the digital trends. It's about making sure that, as you reinvent your businesses, you're not just looking at what you do today; you're always looking at that as a line that's being deprecated. What are you going to do in addition to that? That's where your growth is going to come from, and SAP is all about helping customers become digitally aware and transform their organizations. >> Peter: So, you're having conversations with customers all the time about the evolution of data management technologies, your argument being that HANA is more advanced: a columnar database, in-memory, speed, more complexity in the IO, all kinds of wonderful things that it makes possible that can then be reflected in more complex, or more rich, value-creating applications. But the data is often undervalued. >> Irfan: Of course. >> The data itself. We haven't figured out how to look at that data and start treating it literally as capital.
We talk about a business problem, we talk about how much money we want to put there, how many people we want to put there, but we don't yet talk about how much data is going to be required either to go there and make it work, or that we're going to capture out of it. How are you working with customers to think that problem through? Are they thinking it through differently, in your experience? >> Yeah, that's a great question. So, firstly, if I were to look at the value association with data, we can borrow from the airline industry, perhaps, as an analogy. If you look at data, it's very equivalent to passengers. The businesses that we typically operate are working on first and business class data. They've actually made significant investments around how to securely store, access, process, and manage all of this business class and first class data. But there's an economy class of data which is significant and very pervasive, and if you look at it from the airline's point of view, an individual economy class passenger doesn't really equate to an awful lot, but if you aggregate all the economy class passengers, it's significant. It's actually more than your business and first class revenue, so to speak. So, consequently, large organizations have to start looking at data, monetizing the data, and not ignoring all of the noise signals that come out of the sensors, out of the various machinery, and making sure that they can aggregate that data and build context around it. So, we have to start thinking along those ways. >> John: Yes, I love that analogy, so good. But let's take that one step further. I want to make sure I go on the right plane, right? So, one, that's the data-aware part. So, digital assets are the data, so valuation techniques come into play, but having a horizontally traversable data plane, really, in real time, is a big thing, because not only do I go through security, put my shoes through, my laptop out, that's just IT. The plane is where the action is.
I want to be on the right plane. That's making data aware; the alchemy behind it, that's the trick. What are your thoughts on that, because this is a cutting-edge area. You hear AI ontologies and stuff going on there now, machine learning, certainly. Surely not advanced to the point where it's really working yet; it's getting there, but what are your thoughts on all this? >> Yeah, so I think with the vehicle that you're referring to, whether it's a plane or whatever the mode of transportation is, at a metaphor level, we have to understand that there is a value associated with making decisions at the right time, when you have all the information that you need. And by definition, we have created a culture in IT where we segregate data. We create this almost two-swim-lane approach: this is my now data, this is my transactional data, and here's my data that will then feed into some other environment, and I may look to analyze it after the event. Now, getting back to the HANA philosophy: from day one, it was about creating a simplified model where you can do live analytics on transactional data. This is a big, significant shift. So, using your aircraft analogy, as I'm on there, I don't want to suddenly worry that I didn't pick up my magazine from Duty Free or whatever, from the newspaper stand. I've got no content now, I can't do anything. Alright, for the next nine hours, I'm on a plane and I've got nothing to do. I've got no internet, I've got no connectivity. The idea is that you want to have all of the right information readily available and make real-time decisions. That calls for simplified architectures; that's all about HANA. >> We're getting the signal here. I know you're super busy. Thanks so much for coming on theCUBE. I want to get one final question in. What's your vision around your plans? I'll say it's cutting-edge, you've got a great area, the ecosystem's developing nicely. What are your goals for the next year? What are you looking to do? What are your key KPIs?
What are you trying to knock down this year? What are your plans? >> I mean, first and foremost, we've spent an awful lot of time talking about SAP transformations and around SAP customer landscape transformations. S/4 is all about that; that is a digital core. But the translation of a digital core to SAP should not be inhibiting other customers who don't have an SAP transaction or application foundation. We want to be able to take SAP to every single platform usage out there, and most customers will have a need for HANA-like technology. So, at the top of my agenda is: let's increase the full-use requirements and actual value of HANA, and we're seeing an awful lot of traction there. The second thing is, we're now driving towards the cloud. HCP is the definitive venue, not just for the ecosystem and for the developer, but also for the traditional SAP customers, and we're going to be promoting an awful lot more exciting relationships. I'd love to be able to speak to you again in the future about how the evolution is taking place. >> John: We wish we had more time. You're a super guest, great insight. Thank you for sharing the data here. >> Irfan: Thank you for having me. >> John: On theCUBE. We'll be right back with more live coverage here inside theCUBE at Sapphire Now. You're watching theCUBE. (techno music) (calm music) >> Voiceover: There'll be millions of people in the near future that want to be involved in their own personal well-being and well--
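Irfan's "two swim lane" point in this segment, that HANA does live analytics on transactional data rather than copying it into a separate analytics environment, can be made concrete with a toy sketch. SQLite stands in here for an in-memory engine like HANA; the table and numbers are invented for illustration:

```python
# Minimal sketch of "live analytics on transactional data": one store serves
# both the write and, immediately, the analytic read. No copy into a second,
# after-the-event analytics silo.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (region TEXT, amount REAL)")
db.executemany("INSERT INTO sales VALUES (?, ?)",
               [("EMEA", 120.0), ("APAC", 80.0), ("EMEA", 50.0)])

# Analytics on the same data, the instant it lands:
total = db.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(total)  # -> [('APAC', 80.0), ('EMEA', 170.0)]
```

The design point is the single simplified model: there is no second copy to refresh, so the analytic answer is always current with the last transaction.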

Published Date : May 19 2016


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Vora | PERSON | 0.99+
John | PERSON | 0.99+
Paul | PERSON | 0.99+
Ifran Khan | PERSON | 0.99+
Apple | ORGANIZATION | 0.99+
Peter Burris | PERSON | 0.99+
two | QUANTITY | 0.99+
Ifran | PERSON | 0.99+
John Furrier | PERSON | 0.99+
2014 | DATE | 0.99+
Irfan Khan | PERSON | 0.99+
2013 | DATE | 0.99+
2015 | DATE | 0.99+
Bill McDermott | PERSON | 0.99+
HANA | TITLE | 0.99+
2010 | DATE | 0.99+
Console Inc. | ORGANIZATION | 0.99+
next year | DATE | 0.99+
last week | DATE | 0.99+
EMC | ORGANIZATION | 0.99+
HANA Cloud Platform | TITLE | 0.99+
S/4 | TITLE | 0.99+
Capgemini | ORGANIZATION | 0.99+
Orlando, Florida | LOCATION | 0.99+
SAP | ORGANIZATION | 0.99+
second thing | QUANTITY | 0.99+
today | DATE | 0.98+
one final question | QUANTITY | 0.98+
MIT | ORGANIZATION | 0.98+
first | QUANTITY | 0.97+
Hadoop | TITLE | 0.97+
2016 | DATE | 0.97+
HANA Cloud | TITLE | 0.97+
one | QUANTITY | 0.97+
approximately 40 percent | QUANTITY | 0.96+
firstly | QUANTITY | 0.96+
one study | QUANTITY | 0.96+
four more years | QUANTITY | 0.96+
three milestones | QUANTITY | 0.95+
five prevailing trends | QUANTITY | 0.94+
theCUBE | ORGANIZATION | 0.94+
five years | QUANTITY | 0.93+
one statistic | QUANTITY | 0.92+
this year | DATE | 0.91+
SAP HANA Cloud | TITLE | 0.91+
first thing | QUANTITY | 0.91+
day one | QUANTITY | 0.9+
2020 | DATE | 0.9+