Image Title

Search Results for S3 Buckets:

Loris Degioanni | AWS Startup Showcase S2 Ep 1 | Open Cloud Innovations


 

>>Welcoming into the cubes presentation of AWS startup showcase open cloud innovations. This is season two episode one of the ongoing series covering exciting hot startups from the AWS ecosystem. Today's episode. One of season two theme is open source community and the open cloud innovations. I'm your host, John farrier of the cube. And today we're excited to be joined by Loris Dajani who is the C T O chief technology officer and founder of cystic found that in his backyard with some wine and beer. Great to see you. We're here to talk about Falco finding cloud threats in real time. Thank you for joining us, Laura. Thanks. Good to see you >>Love that your company was founded in your backyard. Classic startup story. You have been growing very, very fast. And the key point of the showcase is to talk about the startups that are making a difference and, and that are winning and doing well. You guys have done extremely well with your business. Congratulations, but thank you. The big theme is security and as organizations have moved their business critical applications to the cloud, the attackers have followed. This is Billy important in the industry. You guys are in the middle of this. What's your view on this? What's your take? What's your reaction? >>Yeah. As we, as a end ecosystem are moving to the cloud as more and more, we are developing cloud native applications. We relying on CACD. We are relying on orchestrations in containers. Security is becoming more and more important. And I would say more and more complex. I mean, we're reading every day in the news about attacks about data leaks and so on. There's rarely a day when there's nothing major happening and that we can see the press from this point of view. And definitely things are evolving. Things are changing in the cloud. In for example, Cisco just released a cloud native security and usage report a few days ago. And the mundane things that we found among our user base, for example, 60, 66% of containers are running as rude. So still many organizations adopting a relatively relaxed way to deploy their applications. Not because they like doing it, but because it tends to be, you know, easier and a little bit with a little bit less ration. >>We also found that that 27% of users unnecessary route access in the 73% of the cloud accounts, public has three buckets. This is all stuff that is all good, but can generate consequences when you make a mistake, like typically, you know, your data leaks, no, because of super sophisticated attacks, but because somebody in your organization forgets maybe some data on it on a public history bucket, or because some credentials that are not restrictive enough, maybe are leaked to another team member or, or, or a Gita, you know, repository or something like that. So is infrastructures and the software becomes a let's a more sophisticated and more automated. There's also at the same time, more risks and opportunities for misconfigurations that then tend to be, you know, very often the sewers of, of issues in the cloud. >>Yeah, those self-inflicted wounds definitely come up. We've seen people leaving S3 buckets open, you know, it's user error, but, you know, w w those are small little things that get taken care of pretty quickly. That's just hygiene. It's just discipline. You know, most of the sophisticated enterprises are moving way past that, but now they're adopting more cloud native, right. And as they get into the critical apps, securing them has been challenging. We've talked to many CEOs and CSOs, and they say that to us. Yeah. It's very challenging, but we're on it. I have to ask you, what should people worry about when secure in the cloud, because they know is challenging, then they'll have the opportunity on the other side, what are they worried about? What do you see people scared of or addressing, or what should I be worried about when securing the cloud? >>Yeah, definitely. Sometimes when I'm talking about the security, I like to compare, you know, the old data center in that the old monolithic applications to a castle, you know, in middle aged castle. So what, what did you do to protect your castle? You used to build very thick walls around it, and then a small entrance and be very careful about the entrance, you know, protect the entrance very well. So what we used to doing that, that data center was protect everything, you know, the, the whole perimeter in a very aggressive way with firewalls and making sure that there was only a very narrow entrance to our data center. And, you know, as much as possible, like active security there, like firewalls or this kind of stuff. Now we're in the cloud. Now, it's everything. Everything is much more diffused, right? Our users, our customers are coming from all over the planet, every country, every geography, every time, but also our internal team is coming from everywhere because they're all accessing a cloud environment. >>You know, they often from home for different offices, again, from every different geography, every different country. So in this configuration, the metaphor data that they like to use is an amusement park, right? You have a big area with many important things inside in the users and operators that are coming from different dangerous is that you cannot really block, you know, you need to let everything come in and in operate together in these kinds of environment, the traditional protection is not really effective. It's overwhelming. And it doesn't really serve the purpose that we need. We cannot build a giant water under our amusement park. We need people to come in. So what we're finding is that understanding, getting visibility and doing, if you Rheodyne is much more important. So it's more like we need to replace the big walls with a granular network of security cameras that allow us to see what's happening in the, in the different areas of our amusement park. And we need to be able to do that in a way that is real time and allows us to react in a smart way as things happen because in the modern world of cloud five minutes of delay in understanding that something is wrong, mean that you're ready being, you know, attacked and your data's already being >>Well. I also love the analogy of the amusement park. And of course, certain rides, you need to be a certain height to ride the rollercoaster that I guess, that's it credentials or security credentials, as we say, but in all seriousness, the perimeter is dead. We all know that also moats were relied upon as well in the old days, you know, you secure the firewall, nothing comes in, goes out, and then once you're in, you don't know what's going on. Now that's flipped. There's no walls, there's no moats everyone's in. And so you're saying this kind of security camera kind of model is key. So again, this topic here is securing real time. Yeah. How do you do that? Because it's happening so fast. It's moving. There's a lot of movement. It's not at rest there's data moving around fast. What's the secret sauce to making real identifying real-time threats in an enterprise. >>Yeah. And in, in our opinion, there are some key ingredients. One is a granularity, right? You cannot really understand the threats in your amusement park. If you're just watching these from, from a satellite picture. So you need to be there. You need to be granular. You need to be located in the, in the areas where stuff happens. This means, for example, in, in security for the clowning in runtime, security is important to whoever your sensors that are distributed, that are able to observe every single end point. Not only that, but you also need to look at the infrastructure, right? From this point of view, cloud providers like Amazon, for example, offer nice facilities. Like for example, there's CloudTrail in AWS that collects in a nice opinionated consistent way, the data that is coming from multiple cloud services. So it's important from one point of view, to go deep into, into the endpoint, into the processes, into what's executing, but also collect his information like the cultural information and being able to correlate it to there's no full security without covering all of the basics. >>So a security is a matter of both granularity and being able to go deep and understanding what every single item does, but also being able to go abroad and collect the right data, the right data sources and correlated. And then the real time is really critical. So decisions need to be taken as the data comes in. So the streaming nature of security engines is becoming more and more important. So the step one of course, security, especially cost security, posture management was very much let's ball. Once in a while, let's, let's involve the API and see what's happening. This is still important. Of course, you know, you need to have the basics covered, but more and more, the paradigm needs to change to, okay, the data is coming in second by second, instead of asking for the data manually, once in a while, second by second, there's the moment it arrives. You need to be able to detect, correlate, take decisions. And so, you know, machine learning is very important. Automation is very important. The rules that are coming from the community on a daily basis are, are very important. >>Let me ask you a question, cause I love this topic because it's a data problem at the same time. There's some network action going on. I love this idea of no perimeter. You're going to be monitoring anything, but there's been trade offs in the past, overhead involved, whether you're monitoring or putting probes in the network or the different, there's all kinds of different approaches. How does the new technology with cloud and machine learning change the dynamics of the kinds of approaches? Because it's kind of not old tech, but you the same similar concepts to network management, other things, what what's going on now that's different and what makes this possible today? >>Yeah, I think from the friction point of view, which is one very important topic here. So this needs to be deployed efficiently and easily in this transparency, transparent as possible, everywhere, everywhere to avoid blind spots and making sure that everything is scheduled in front. His point of view, it's very important to integrate with the orchestration is very important to make use of all of the facilities that Amazon provides in the it's very important to have a system that is deployed automatically and not manually. That is in particular, the only to avoid blind spots because it's manual deployment is employed. Somebody would forget, you know, to deploy where somewhere where it's important. And then from the performance point of view, very much, for example, with Falco, you know, our open source front-end security engine, we really took key design decisions at the beginning to make sure that the engine would be able to support in Paris, millions of events per second, with minimal overhead. >>You know, they're barely measure measurable overhead. When you want to design something like that, you know, that you need to accept some kind of trade-offs. You need to know that you need to maybe limit a little bit this expressiveness, you know, or what can be done, but ease of deployment and performance were more important goals here. And you know, it's not uncommon for us is Dave to have users of Farco or commercial customers that they have tens of thousands, hundreds of thousands of machines. You know, I said two machines and sometimes millions of containers. And in these environments, lightweight is key. You want death, but you want overhead to be really meaningful and >>Okay, so a amusement park, a lot of diverse applications. So integration, I get that orchestration brings back the Kubernetes angle a little bit and Falco and per overhead and performance cloud scale. So all these things are working in favor. If I get that right, is that, am I getting that right? You get the cloud scale, you get the integration and open. >>Yeah, exactly. Any like ingredients over SEP, you know, and that, and with these ingredients, it's possible to bake a, a recipe to, to have a plate better, can be more usable, more effective and more efficient. That may be the place that we're doing in the previous direction. >>Oh, so I've got to ask you about Falco because it's come up a lot. We talked about it on our cube conversations already on the internet. Check that out. And a great conversation there. You guys have close to 40 million plus million downloads of, of this. You have also 80 was far gate integration, so six, some significant traction. What does this mean? I mean, what is it telling us? Why is this successful? What are people doing with Falco? I see this as a leading indicator, and I know you guys were sponsoring the project, so congratulations and propelled your business, but there's something going on here. What does this as a leading indicator of? >>Yeah. And for, for the audience, Falco is the runtime security tool of the cloud native generation such. And so when we, the Falco, we were inspired by previous generation, for example, network intrusion detection, system tools, and a post protection tools and so on. But we created essentially a unique tool that would really be designed for the modern paradigm of containers, cloud CIC, and salt and Falco essentially is able to collect a bunch of brainer information from your applications that are running in the cloud and is a religion that is based on policies that are driven by the community, essentially that allow you to detect misconfigurations attacks and normals conditions in your cloud, in your cloud applications. Recently, we announced that the extension of Falco to support a cloud infrastructure and time security by parsing cloud logs, like cloud trail and so on. So now Falba can be used at the same time to protect the workloads that are running in virtual machines or containers. >>And also the cloud infrastructure to give the audience a couple of examples, focused, able to detect if somebody is running a shelf in a radius container, or if somebody is downloading a sensitive by, from an S3 bucket, all of these in real time with Falco, we decided to go really with CR study. This is Degas was one of the team members that started it, but we decided to go to the community right away, because this is one other ingredient. We are talking about the ingredients before, and there's not a successful modern security tool without being able to leverage the community and empower the community to contribute to it, to use it, to validate and so on. And that's also why we contributed Falco to the cloud native computing foundation. So that Falco is a CNCF tool and is blessed by many organizations. We are also partnering with many companies, including Amazon. Last year, we released that far gate support for Falco. And that was done is a project that was done in cooperation with Amazon, so that we could have strong runtime security for the containers that are running in. >>Well, I've got to say, first of all, congratulations. And I think that's a bold move to donate or not donate contribute to the open source community because you're enabling a lot of people to do great things. And some people might be scared. They think they might be foreclosing and beneficial in the future, but in the reality, that is the new business model open source. So I think that's worth calling out and congratulations. This is the new commercial open source paradigm. And it kind of leads into my last question, which is why is security well-positioned to benefit from open source besides the fact that the new model of getting people enabled and getting scale and getting standards like you're doing, makes everybody win. And again, that's a community model. That's not a proprietary approach. So again, source again, big part of this. Why was security benefit from opensource? >>I am a strong believer. I mean, we are in a better, we could say we are in a war, right? The good guys versus the bad guys. The internet is full of bad guys. And these bad guys are coordinated, are motivated, are sometimes we'll find it. And we'll equip. We win only if we fight this war as a community. So the old paradigm of vendors building their own Eva towers, you know, their own self-contained ecosystems and that the us as users as, as, as customers, every many different, you know, environments that don't communicate with each other, just doesn't take advantage of our capabilities. Our strength is as a community. So we are much stronger against the big guys and we have a much better chance doing when this war, if we adopt a paradigm that allows us to work together. Think only about for example, I don't know, companies any to train, you know, the workforce on the security best practices on the security tools. >>It's much better to standardize on something, build the stack that is accepted by everybody and tell it can focus on learning the stack and becoming a master of the steak rounded rather than every single organization naming the different tool. And, and then B it's very hard to attract talent and to have the right, you know, people that can help you with, with your issues in, in, in, in, in, with your goals. So the future of security is going to be open source. I'm a strong believer in that, and we'll see more and more examples like Falco of initiatives that really start with, with the community and for the community. >>Like we always say an open, open winds, always turn the lights on, put the code out there. And I think, I think the community model is winning. Congratulations, Loris Dajani CTO and founder of SIS dig congratulatory success. And thank you for coming on the cube for the ADB startup showcase open cloud innovations. Thanks for coming on. Okay. Is the cube stay with us all day long every day with the cube, check us out the cube.net. I'm John furrier. Thanks for watching.

Published Date : Jan 26 2022

SUMMARY :

Good to see you And the key point of the showcase is to talk about the startups that are making a difference and, but because it tends to be, you know, easier and a little bit with a little bit less ration. for misconfigurations that then tend to be, you know, very often the sewers You know, most of the sophisticated enterprises I like to compare, you know, the old data center in that the metaphor data that they like to use is an amusement park, right? What's the secret sauce to making real identifying real-time threats in the cultural information and being able to correlate it to there's no full security the paradigm needs to change to, okay, the data is coming in second by second, How does the new technology with cloud and machine learning change And then from the performance point of view, very much, for example, with Falco, you know, You need to know that you need to maybe limit a little bit this expressiveness, you know, You get the cloud scale, you get the integration and open. over SEP, you know, and that, and with these ingredients, it's possible to bake Oh, so I've got to ask you about Falco because it's come up a lot. on policies that are driven by the community, essentially that allow you to detect And also the cloud infrastructure to give the audience a couple of examples, And I think that's a bold move to donate or not donate contribute that the us as users as, as, as customers, to attract talent and to have the right, you know, people that can help you with, And thank you for coming

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
LauraPERSON

0.99+

AmazonORGANIZATION

0.99+

Loris DajaniPERSON

0.99+

Loris DegioanniPERSON

0.99+

two machinesQUANTITY

0.99+

Loris DajaniPERSON

0.99+

73%QUANTITY

0.99+

ParisLOCATION

0.99+

27%QUANTITY

0.99+

CiscoORGANIZATION

0.99+

Last yearDATE

0.99+

FalcoORGANIZATION

0.99+

millionsQUANTITY

0.99+

sixQUANTITY

0.99+

FarcoORGANIZATION

0.99+

John farrierPERSON

0.99+

AWSORGANIZATION

0.99+

DavePERSON

0.99+

five minutesQUANTITY

0.99+

tens of thousandsQUANTITY

0.99+

one pointQUANTITY

0.99+

oneQUANTITY

0.99+

TodayDATE

0.98+

todayDATE

0.98+

bothQUANTITY

0.98+

cube.netOTHER

0.97+

BillyPERSON

0.96+

a dayQUANTITY

0.95+

SIS digORGANIZATION

0.94+

one other ingredientQUANTITY

0.94+

OneQUANTITY

0.93+

C T OORGANIZATION

0.91+

Ep 1QUANTITY

0.89+

secondQUANTITY

0.89+

80QUANTITY

0.88+

singleQUANTITY

0.88+

few days agoDATE

0.88+

one very important topicQUANTITY

0.87+

hundreds of thousands of machinesQUANTITY

0.86+

FalbaTITLE

0.85+

S3TITLE

0.83+

single itemQUANTITY

0.83+

every geographyQUANTITY

0.8+

every countryQUANTITY

0.78+

AWS Startup Showcase S2EVENT

0.75+

three bucketsQUANTITY

0.75+

CTOPERSON

0.75+

60, 66%QUANTITY

0.74+

CloudTrailTITLE

0.74+

40 million plus million downloadsQUANTITY

0.73+

containersQUANTITY

0.73+

twoQUANTITY

0.73+

John furrierPERSON

0.73+

DegasPERSON

0.72+

millions of events per secondQUANTITY

0.67+

single end pointQUANTITY

0.67+

season two themeQUANTITY

0.65+

firstQUANTITY

0.63+

ADBORGANIZATION

0.6+

KubernetesORGANIZATION

0.59+

episode oneQUANTITY

0.59+

RheodyneORGANIZATION

0.59+

studyORGANIZATION

0.56+

step oneQUANTITY

0.55+

seasonOTHER

0.54+

EvaORGANIZATION

0.53+

teamQUANTITY

0.53+

SEPTITLE

0.52+

CACDORGANIZATION

0.52+

everyQUANTITY

0.52+

viewQUANTITY

0.5+

CRTITLE

0.49+

S3COMMERCIAL_ITEM

0.35+

Sandeep Lahane and Shyam Krishnaswamy | KubeCon + CloudNative Con NA 2021


 

>>Okay, welcome back everyone. To the cubes coverage here, coop con cloud native con 2021 in person. The Cuba's here. I'm John farrier hosted the queue with Dave Nicholson, my cohost and cloud analyst, man. It's great to be back, uh, in person. We also have a hybrid event. We've got two great guests here, the founders of deep fence, sham, Krista Swami, C co-founder and CTO, and said deep line founder. It's great to have you on. This is a super important topic. As cloud native is crossed over. Everyone's talking about it mainstream, blah, blah, blah. But security is driving the agenda. You guys are in the middle of it. Cutting edge approach and news >>Like, like we were talking about John, we had operating at the intersection of the awesome desk, right? Open source security and cloud cloud native, essentially. Absolutely. And today's a super exciting day for us. We're launching something called track pepper, Apache V2, completely open source. Think of it as an x-ray or MRI scan for your cloud scan, you know, visualize this cloud at scale, all of the modalities, essentially, we look at cloud as a continuum. It's not a single modality it's containers. It's communities, it's William to settle we'll list all of them. Co-exist side by side. That's how we look at it and threat map. It essentially allows you to visualize all of this in real time, think of fed map, but as something that you, that, that takes over the Baton from the CIS unit, when the lift shift left gets over, that's when the threat pepper comes into picture. So yeah, super excited. >>It's like really gives that developer and the teams ops teams visibility into kind of health statistics of the cloud. But also, as you said, it's not just software mechanisms. The cloud is evolving, new sources being turned on and off. No one even knows what's going on. Sometimes this is a really hidden problem, right? Yeah, >>Absolutely. The basic problem is, I mean, I would just talk to, you know, a gentleman 70 of this morning is two 70 billion. Plus public cloud spent John two 70 billion plus even 3 billion, 30 billion they're saying right. Uh, projected revenue. And there is not even a single community tool to visualize all the clouds and all the cloud modalities at scale, let's start there. That's what we sort of decided, you know what, let's start with utilizing everything else there. And then look for known badness, which is the vulnerabilities, which still remains the biggest attack vector. >>Sure. Tell us about some of the hood. How does this all work cloud scale? Is it a cloud service managed service it's code? Take us out, take us through product. Absolutely. >>So, so, but before that, right, there's one small point that Sandeep mentioned. And Richard, I'd like to elaborate here, right? He spoke about the whole cloud spending such a large volume, right? If you look at the way people look at applications today, it's not just single clone anymore. It's multicloud multi regions across diverse plants, right? What does the solution to look at what my interests are to this point? That is a missing piece here. And that is what we're trying to tackle. And that is where we are going as open source. Coming back to your question, right? How does this whole thing work? So we have a completely on-prem model, right? Where customers can download the code today, install it. It can bill, we give binary stool and Shockley just as the exciting announcement that came out today, you're going to see somewhat exciting entrepreneurs. That's going to make a lot more easy for folks out there all day. Yeah, that's fine. >>So how does this, how does this all fit into security as a micro service and your, your vision of that? >>Absolutely. Absolutely. You know, I'll tell you, this has to do with the one of the continual conferences I would sort of when I was trying to get an idea, trying to shape the whole vision really? Right. Hey, what about syncretism? Microservice? I would go and ask people. They mentioned that sounds, that makes sense. Everything is becoming a microservice. Really. So what you're saying is you're going to deploy one more microservice, just like I deploy all of my other microservices. And that's going to look after my microservices. That compute back makes logical sense, essentially. That was the Genesis of that terminology. So defense essentially is deployed as a microservice. You go to scale, it's deployed, operated just like you to your microservices. So no code changes, no other tool chain changes. It just is yet another microservice. That's going to look after you talk about >>The, >>So there's one point I would like to add here, which is something very interesting, right? The whole concept of microservice came from, if you remember the memo from Jeff Bezos, that everybody's going to go, Microsoft would be fired. That gave rise to a very conventional unconditionally of thinking about their applications. Our deep friends, we believe that security should be. Now. You should bring the same unconventional way of thinking to security. Your security is all bottom up. No, it has to start popping up. So your applications on microservice, your security should also be a micro. >>So you need a microservice for a microservice security for the security. You're starting to get into a paradigm shift where you starting to see the API economy that bayzos and Amazon philosophy and their approach go Beanstream. So when I got to ask you, because this is a trend we've been watching and reporting on the actual application development processes, changing from the old school, you know, life cycle, software defined life cycle to now you've got machine learning and bots. You have AI. Now you have people are building apps differently. And the speed of which they want to code is high. And then other teams are slowing them down. So I've heard security teams just screw people over a couple of days. Oh my God, I can wait five days. No, it used to be five weeks. Now it's five days. They think that's progress. They want five minutes, the developers in real time. So this is a real deal optimum. >>Well, you know what? Shift left was a good thing. Instill a good thing. It helps you sort of figure out the issues early on in the development life cycle, essentially. Right? And so you started weaving in security early on and it stays with you. The problem is we are hydrating. So frequently you end up with a few hundred vulnerabilities every time you scan oftentimes few thousand and then you go to runtime and you can't really fix all these thousand one. You know? So this is where, so there is a little bit of a gap there. If you're saying, if look at the CIC cycle, the in financial cycle that they show you, right. You've got the far left, which is where you have the SAS tools, snake and all of that. And then you've got the center where, which is where you hand off this to ops. >>And then on the right side, you've got tech ops defense essentially starts in the middle and says, look, I know you've had thousand one abilities. Okay. But at run time, I see only one of those packages is loaded in memory. And only that is getting traffic. You go and fix that one because that's going to heart. You see what I'm saying? So that gap is what we're doing. So you start with the left, we come in in the middle and stay with you throughout, you know, till the whole, uh, she asks me. Yeah, well that >>Th that, that touches on a subject. What are the, what are the changes that we're seeing? What are the new threats that are associated with containerization and kind of coupled with that, look back on traditional security methods and how are our traditional security methods failing us with those new requirements that come out of the microservices and containerized world. And so, >>So having, having been at FireEye, I'll tell you I've worked on their windows products and Juniper, >>And very, very deeply involved in. >>And in fact, you know what I mean, at the company, we even sold a product to Palo Alto. So having been around the space, really, I think it's, it's, it's a, it's a foregone conclusion to say that attackers have become more sophisticated. Of course they have. Yeah. It's not a single attack vector, which gets you down anymore. It's not a script getting somewhere shooting who just sending one malicious HTP request exploiting, no, these are multi-vector multi-stage attacks. They, they evolve over time in space, you know? And then what happens is I could have shot a revolving with time and space, one notable cause of piling up. Right? And on the other side, you've got the infrastructure, which is getting fragmented. What I mean by fragmented is it's not one data center where everything would look and feel and smell similar it's containers and tuberosities and several lessons. All of that stuff is hackable, right? So you've got that big shift happening there. You've got attackers, how do you build visibility? So, in fact, initially we used to, we would go and speak with, uh, DevSecOps practitioner say, Hey, what is the coalition? Is it that you don't have enough scanners to scan? Is it that at runtime? What is the main problem? It's the lack of visibility, lack of observability throughout the life cycle, as well as through outage, it was an issue with allegation. >>And the fact that the attackers know that too, they're exploiting the fact that they can't see they're blind. And it's like, you know what? Trying to land a plane that flew yesterday and you think it's landing tomorrow. It's all like lagging. Right? Exactly. So I got to ask you, because this has comes up a lot, because remember when we're in our 11th season with the cube, and I remember conversations going back to 2010, a cloud's not secure. You know, this is before everyone realized shit, the club's better than on premises if you have it. Right. So a trend is emerged. I want to get your thoughts on this. What percentage of the hacks are because the attackers are lazier than the more sophisticated ones, because you see two buckets I'm going to get, I'm going to work hard to get this, or I'm going to go for the easy low-hanging fruit. Most people have just a setup that's just low hanging fruit for the hackers versus some sort of complex or thought through programmatic cloud system, because now is actually better if you do it. Right. So the more sophisticated the environment, the harder it is for the hackers, AK Bob wire, whatever you wanna call it, what level do we cross over? >>When does it go from the script periods to the, the, >>Katie's kind of like, okay, I want to go get the S3 bucket or whatever. There's like levels of like laziness. Yeah. Okay. I, yeah. Versus I'm really going to orchestrate Spearfish social engineer, the more sophisticated economy driven ones. Yeah. >>I think, you know what, this attackers, the hacks aren't being conducted the way they worked in the 10, five years ago, isn't saying that they been outsourced, there are sophisticated teams for building exploiters. This is the whole industry up there. Even the nation, it's an economy really. Right. So, um, the known badness or the known attacks, I think we have had tools. We have had their own tools, signature based tools, which would know, look for certain payloads and say, this is that I know it. Right. You get the stuff really starts sort of, uh, getting out of control when you have so many sort of different modalities running side by side. So much, so much moving attack surfaces, they will evolve. And you never know that you've scanned enough because you never happened because we just pushed the code. >>Yeah. So we've been covering the iron debt. Kim retired general, Keith Alexander, his company. They have this iron dome concept where there's more collective sharing. Um, how do you see that trend? Because I can almost imagine that the open-source man is going to love what you guys got. You're going to probably feed on it, like it's nobody's business, but then you start thinking, okay, we're going to be open. And you have a platform approach, not so much a tool based approach. So just give me tools. We all know that when does it, we cross over to the Nirvana of like real security sharing. Real-time telemetry data. >>And I want to answer this in two parts. The first part is really a lot of this wisdom is only in the community. It's a tribal knowledge. It's their informal feeds in from get up tickets. And you know, a lot of these things, what we're really doing with threat map, but as we are consolidating that and giving it out as a sort of platform that you can use, I like to go for free. This is the part you will never go to monetize this. And we are certain about disaster. What we are monetizing instead is you have, like I said, the x-ray or MRI scan of the cloud, which tells you what the pain points are. This is feel free. This is public collective good. This is a Patrick reader. This is for free. It's shocking. >>I took this long to get to that point, by the way, in this discussion. >>Yeah, >>This is this timing's perfect. >>Security is collective good. Right? And if you're doing open source, community-based, you know, programs like this is for the collector group. What we do look, this whole other set map is going to be open source. We going to make it a platform and our commercial version, which is called fetch Stryker, which is where we have our core IP, which is basically think about this way, right? If you figured out all the pain points and using tech map, or this was a free, and now you wanted the remedy for that pain feed to target a defense, we targeted quarantining of those statin workloads and all that stuff. And that's what our IP is. What we really do there is we said, look, you figured out the attack surface using tech fabric. Now I'm going to use threat Stryker to protect their attacks and stress >>Free. Not free to, or is that going to be Fort bang? >>Oh, that's for, okay. >>That's awesome. So you bring the goodness to the party, the goods to the party, again, share that collective, see where that goes. And the Stryker on top is how you guys monetize. >>And that's where we do some uniquely normal things. I would want to talk about that. If, if, if, if you know public probably for 30 seconds or so unique things we do in industry, which is basically being able to monitor what comes in, what goes out and what changes across time and space, because look, most of the modern attacks evolve over time and space, right? So you go to be able to see things like this. Here's a party structure, which has a vulnerability threats. Mapper told you that to strike. And what it does is it tells you a bunch of stress has a vulnerable again, know that somebody is sending a Melissa's HTP request, which has a malicious payload. And you know what, tomorrow there's a file system change. And there is outbound connection going to some funny place. That is the part that we're wanting this. >>Yeah. And you give away the tool to identify the threats and sell the hammer. >>That's giving you protection. >>Yeah. Yeah. Awesome. I love you guys love this product. I love how you're doing it. I got to ask you to define what is security as a microservice. >>So security is a microservice is a deployment modality for us. So defense, what defense has is one console. So defense is currently self posted by the customers within the infrastructure going forward. We'll also be launching a SAS version, the cloud version of it. But what happens as part of this deployment is they're running the management console, which is the gooey, and then a tiny sensor, which is collecting telemetric that is deployed as a microservice is what I'm saying. So you've got 10 containers running defenses level of container. That's, that's an eight or the Microsoft risk. And it utilizes, uh, EDP F you know, for tracing and all that stuff. Yeah. >>Awesome. Well, I think this is the beginning of a shift in the industry. You start to see dev ops and cloud native technologies become the operating model, not just dev dev ops are now in play and infrastructure as code, which is the ethos of a cloud generation is security is code. That's true. That's what you guys are doing. Thanks for coming on. Really appreciate it. Absolutely breaking news here in the queue, obviously great stuff. Open source continues to grow and win in the new model. Collaboration is the cube bringing you all the cover day one, the three days. I'm Jennifer, your host with Dave Nicholson. Thanks for watching.

Published Date : Oct 13 2021

SUMMARY :

It's great to have you on. It essentially allows you to visualize all of this in real time, think of fed map, but as something that you, It's like really gives that developer and the teams ops teams visibility into That's what we sort of decided, you know what, let's start with utilizing everything else there. How does this all work cloud scale? the solution to look at what my interests are to this point? That's going to look after you talk about came from, if you remember the memo from Jeff Bezos, that everybody's going to go, Microsoft would be fired. So you need a microservice for a microservice security for the security. You've got the far left, which is where you have the SAS So you start with the left, we come in in the middle and stay with you throughout, What are the new threats that are associated with containerization and kind And in fact, you know what I mean, at the company, we even sold a product to Palo Alto. the environment, the harder it is for the hackers, AK Bob wire, whatever you wanna call it, what level the more sophisticated economy driven ones. And you never know that you've scanned enough because Because I can almost imagine that the open-source man is going to love what you guys got. This is the part you will never go to monetize this. What we really do there is we said, look, you figured out the attack surface using tech And the Stryker on top is how you guys monetize. And what it does is it tells you a bunch of stress has a vulnerable I got to ask you to define what is security as a microservice. And it utilizes, uh, EDP F you know, for tracing and all that stuff. Collaboration is the cube bringing you all the cover day one, the three days.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
RichardPERSON

0.99+

Dave NicholsonPERSON

0.99+

Dave NicholsonPERSON

0.99+

Keith AlexanderPERSON

0.99+

JohnPERSON

0.99+

five weeksQUANTITY

0.99+

five daysQUANTITY

0.99+

30 secondsQUANTITY

0.99+

AmazonORGANIZATION

0.99+

five minutesQUANTITY

0.99+

KimPERSON

0.99+

MicrosoftORGANIZATION

0.99+

JenniferPERSON

0.99+

Jeff BezosPERSON

0.99+

John farrierPERSON

0.99+

Krista SwamiPERSON

0.99+

Shyam KrishnaswamyPERSON

0.99+

two partsQUANTITY

0.99+

2010DATE

0.99+

Sandeep LahanePERSON

0.99+

tomorrowDATE

0.99+

yesterdayDATE

0.99+

3 billionQUANTITY

0.99+

10 containersQUANTITY

0.99+

todayDATE

0.99+

PatrickPERSON

0.99+

three daysQUANTITY

0.99+

KatiePERSON

0.99+

11th seasonQUANTITY

0.99+

30 billionQUANTITY

0.99+

KubeConEVENT

0.99+

two bucketsQUANTITY

0.98+

bayzosORGANIZATION

0.98+

10DATE

0.98+

one consoleQUANTITY

0.98+

first partQUANTITY

0.98+

MelissaPERSON

0.98+

oneQUANTITY

0.98+

two great guestsQUANTITY

0.98+

Palo AltoLOCATION

0.98+

FireEyeORGANIZATION

0.97+

one pointQUANTITY

0.96+

SandeepPERSON

0.96+

CloudNative ConEVENT

0.96+

JuniperORGANIZATION

0.96+

CubaLOCATION

0.96+

single modalityQUANTITY

0.96+

single attackQUANTITY

0.95+

eightQUANTITY

0.94+

twoQUANTITY

0.94+

70QUANTITY

0.94+

ShockleyORGANIZATION

0.93+

one small pointQUANTITY

0.92+

this morningDATE

0.9+

single cloneQUANTITY

0.89+

thousandQUANTITY

0.89+

day oneQUANTITY

0.88+

SASORGANIZATION

0.87+

70 billionQUANTITY

0.85+

single community toolQUANTITY

0.85+

WilliamPERSON

0.83+

BatonLOCATION

0.83+

five years agoDATE

0.83+

S3COMMERCIAL_ITEM

0.83+

NA 2021EVENT

0.81+

one data centerQUANTITY

0.81+

CTOPERSON

0.79+

con 2021EVENT

0.78+

NirvanaLOCATION

0.78+

ApacheORGANIZATION

0.72+

StrykerORGANIZATION

0.71+

few thousandQUANTITY

0.7+

DevSecOpsORGANIZATION

0.7+

coop con cloud nativeORGANIZATION

0.69+

one abilitiesQUANTITY

0.69+

a couple of daysQUANTITY

0.68+

hundred vulnerabilitiesQUANTITY

0.67+

one more microserviceQUANTITY

0.64+

BeanstreamORGANIZATION

0.64+

track pepperORGANIZATION

0.63+

MapperPERSON

0.62+

AK BobPERSON

0.59+

CISORGANIZATION

0.56+

fenceORGANIZATION

0.54+

V2COMMERCIAL_ITEM

0.45+

StrykerTITLE

0.39+

Gil Geron, Orca Security | AWS Startup Showcase: The Next Big Thing in AI, Security, & Life Sciences


 

(upbeat electronic music) >> Hello, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase. The Next Big Thing in AI, Security, and Life Sciences. In this segment, we feature Orca Security as a notable trend setter within, of course, the security track. I'm your host, Dave Vellante. And today we're joined by Gil Geron. Who's the co-founder and Chief Product Officer at Orca Security. And we're going to discuss how to eliminate cloud security blind spots. Orca has a really novel approach to cybersecurity problems, without using agents. So welcome Gil to today's sessions. Thanks for coming on. >> Thank you for having me. >> You're very welcome. So Gil, you're a disruptor in security and cloud security specifically and you've created an agentless way of securing cloud assets. You call this side scanning. We're going to get into that and probe that a little bit into the how and the why agentless is the future of cloud security. But I want to start at the beginning. What were the main gaps that you saw in cloud security that spawned Orca Security? >> I think that the main gaps that we saw when we started Orca were pretty similar in nature to gaps that we saw in legacy, infrastructures, in more traditional data centers. But when you look at the cloud when you look at the nature of the cloud the ephemeral nature, the technical possibilities and disruptive way of working with a data center, we saw that the usage of traditional approaches like agents in these environments is lacking, it actually not only working as well as it was in the legacy world, it's also, it's providing less value. And in addition, we saw that the friction between the security team and the IT, the engineering, the DevOps in the cloud is much worse or how does that it was, and we wanted to find a way, we want for them to work together to bridge that gap and to actually allow them to leverage the cloud technology as it was intended to gain superior security than what was possible in the on-prem world. >> Excellent, let's talk a little bit more about agentless. I mean, maybe we could talk a little bit about why agentless is so compelling. I mean, it's kind of obvious it's less intrusive. You've got fewer processes to manage, but how did you create your agentless approach to cloud security? >> Yes, so I think the basis of it all is around our mission and what we try to provide. We want to provide seamless security because we believe it will allow the business to grow faster. It will allow the business to adopt technology faster and to be more dynamic and achieve goals faster. And so we've looked on what are the problems or what are the issues that slow you down? And one of them, of course, is the fact that you need to install agents that they cause performance impact, that they are technically segregated from one another, meaning you need to install multiple agents and they need to somehow not interfere with one another. And we saw this friction causes organization to slow down their move to the cloud or slow down the adoption of technology. In the cloud, it's not only having servers, right? You have containers, you have manage services, you have so many different options and opportunities. And so you need a different approach on how to secure that. And so when we understood that this is the challenge, we decided to attack it in three, using three periods; one, trying to provide complete security and complete coverage with no friction, trying to provide comprehensive security, which is taking an holistic approach, a platform approach and combining the data in order to provide you visibility into all of your security assets, and last but not least of course, is context awareness, meaning being able to understand and find these the 1% that matter in the environment. So you can actually improve your security posture and improve your security overall. And to do so, you had to have a technique that does not involve agents. And so what we've done, we've find a way that utilizes the cloud architecture in order to scan the cloud itself, basically when you integrate Orca, you are able within minutes to understand, to read, and to view all of the risks. We are leveraging a technique that we are calling side scanning that uses the API. So it uses the infrastructure of the cloud itself to read the block storage device of every compute instance and every instance, in the environment, and then we can deduce the actual risk of every asset. >> So that's a clever name, side scanning. Tell us a little bit more about that. Maybe you could double click on, on how it works. You've mentioned it's looking into block storage and leveraging the API is a very, very clever actually quite innovative. But help us understand in more detail how it works and why it's better than traditional tools that we might find in this space. >> Yes, so the way that it works is that by reading the block storage device, we are able to actually deduce what is running on your computer, meaning what kind of waste packages applications are running. And then by con combining the context, meaning understanding that what kind of services you have connected to the internet, what is the attack surface for these services? What will be the business impact? Will there be any access to PII or any access to the crown jewels of the organization? You can not only understand the risks. You can also understand the impact and then understand what should be our focus in terms of security of the environment. Different factories, the fact that we are doing it using the infrastructure itself, we are not installing any agents, we are not running any packet. You do not need to change anything in your architecture or design of how you use the cloud in order to utilize Orca Orca is working in a pure SaaS way. And so it means that there is no impact, not on cost and not on performance of your environment while using Orca. And so it reduces any friction that might happen with other parties of the organization when you enjoy the security or improve your security in the cloud. >> Yeah, and no process management intrusion. Now, I presume Gil that you eat your own cooking, meaning you're using your own product. First of all, is that true? And if so, how has your use of Orca as a chief product officer help you scale Orca as a company? >> So it's a great question. I think that something that we understood early on is that there is a, quite a significant difference between the way you architect your security in cloud and also the way that things reach production, meaning there's a difference, that there's a gap between how you imagined, like in everything in life how you imagine things will be and how they are in real life in production. And so, even though we have amazing customers that are extremely proficient in security and have thought of a lot of ways of how to secure the environment. Ans so, we of course, we are trying to secure environment as much as possible. We are using Orca because we understand that no one is perfect. We are not perfect. We might, the engineers might, my engineers might make mistakes like every organization. And so we are using Orca because we want to have complete coverage. We want to understand if we are doing any mistake. And sometimes the gap between the architecture and the hole in the security or the gap that you have in your security could take years to happen. And you need a tool that will constantly monitor your environment. And so that's why we are using Orca all around from day one not to find bugs or to do QA, we're doing it because we need security to our cloud environment that will provide these values. And so we've also passed the compliance auditing like SOC 2 and ISO using Orca and it expedited and allowed us to do these processes extremely fast because of having all of these guardrails and metrics has. >> Yeah, so, okay. So you recognized that you potentially had and did have that same problem as your customer has been. Has it helped you scale as a company obviously but how has it helped you scale as a company? >> So it helped us scale as a company by increasing the trust, the level of trust customer having Orca. It allowed us to adopt technology faster, meaning we need much less diligence or exploration of how to use technology because we have these guardrails. So we can use the richness of the technology that we have in the cloud without the need to stop, to install agents, to try to re architecture the way that we are using the technology. And we simply use it. We simply use the technology that the cloud offer as it is. And so it allows you a rapid scalability. >> Allows you allows you to move at the speed of cloud. Now, so I'm going to ask you as a co-founder, you got to wear many hats first of a co-founder and the leadership component there. And also the chief product officer, you got to go out, you got to get early customers, but but even more importantly you have to keep those customers retention. So maybe you can describe how customers have been using Orca. Did they, what was their aha moment that you've seen customers react to when you showcase the new product? And then how have you been able to keep them as loyal partners? >> So I think that we are very fortunate, we have a lot of, we are blessed with our customers. Many of our customers are vocal customers about what they like about Orca. And I think that something that comes along a lot of times is that this is a solution they have been waiting for. I can't express how many times I hear that I could go on a call and a customer says, "I must say, I must share. "This is a solution I've been looking for." And I think that in that respect, Orca is creating a new standard of what is expected from a security solution because we are transforming the security all in the company from an inhibitor to an enabler. You can use the technology. You can use new tools. You can use the cloud as it was intended. And so (coughs) we have customers like one of these cases is a customer that they have a lot of data and they're all super scared about using S3 buckets. We call over all of these incidents of these three buckets being breached or people connecting to an s3 bucket and downloading the data. So they had a policy saying, "S3 bucket should not be used. "We do not allow any use of S3 bucket." And obviously you do need to use S3 bucket. It's a powerful technology. And so the engineering team in that customer environment, simply installed a VM, installed an FTP server, and very easy to use password to that FTP server. And obviously two years later, someone also put all of the customer databases on that FTP server, open to the internet, open to everyone. And so I think it was for him and for us as well. It was a hard moment. First of all, he planned that no data will be leaked but actually what happened is way worse. The data was open to the to do to the world in a technology that exists for a very long time. And it's probably being scanned by attackers all the time. But after that, he not only allowed them to use S3 bucket because he knew that now he can monitor. Now, you can understand that they are using the technology as intended, now that they are using it securely. It's not open to everyone it's open in the right way. And there was no PII on that S3 bucket. And so I think the way he described it is that, now when he's coming to a meeting about things that needs to be improved, people are waiting for this meeting because he actually knows more than what they know, what they know about the environment. And I see it really so many times where a simple mistake or something that looks benign when you look at the environment in a holistic way, when you are looking on the context, you understand that there is a huge gap. That should be the breech. And another cool example was a case where a customer allowed an access from a third party service that everyone trusts to the crown jewels of the environment. And he did it in a very traditional way. He allowed a certain IP to be open to that environment. So overall it sounds like the correct way to go. You allow only a specific IP to access the environment but what he failed to to notice is that everyone in the world can register for free for this third-party service and access the environment from this IP. And so, even though it looks like you have access from a trusted service, a trusted third party service, when it's a Saas service, it's actually, it can mean that everyone can use it in order to access the environment and using Orca, you saw immediately the access, you saw immediately the risk. And I see it time after time that people are simply using Orca to monitor, to guardrail, to make sure that the environment stays safe throughout time and to communicate better in the organization to explain the risk in a very easy way. And the, I would say the statistics show that within few weeks, more than 85% of the different alerts and risks are being fixed, and think it comes to show how effective it is and how effective it is in improving your posture, because people are taking action. >> Those are two great examples, and of course they have often said that the shared responsibility model is often misunderstood. And those two examples underscore thinking that, "oh I hear all this, see all this press about S3, but it's up to the customer to secure the endpoint components et cetera. Configure it properly is what I'm saying. So what an unintended consequence, but but Orca plays a role in helping the customer with their portion of that shared responsibility. Obviously AWS is taking care of this. Now, as part of this program we ask a little bit of a challenging question to everybody because look it as a startup, you want to do well you want to grow a company. You want to have your employees, you know grow and help your customers. And that's great and grow revenues, et cetera but we feel like there's more. And so we're going to ask you because the theme here is all about cloud scale. What is your defining contribution to the future of cloud at scale, Gil? >> So I think that cloud is allowed the revolution to the data centers, okay? The way that you are building services, the way that you are allowing technology to be more adaptive, dynamic, ephemeral, accurate, and you see that it is being adopted across all vendors all type of industries across the world. I think that Orca is the first company that allows you to use this technology to secure your infrastructure in a way that was not possible in the on-prem world, meaning that when you're using the cloud technology and you're using technologies like Orca, you're actually gaining superior security that what was possible in the pre cloud world. And I think that, to that respect, Orca is going hand in hand with the evolution and actually revolutionizes the way that you expect to consume security, the way that you expect to get value, from security solutions across the world. >> Thank You for that Gil. And so we're at the end of our time, but we'll give you a chance for final wrap up. Bring us home with your summary, please. >> So I think that Orca is building the cloud security solution that actually works with its innovative aid agentless approach to cyber security to gain complete coverage, comprehensive solution and to gain, to understand the complete context of the 1% that matters in your security challenges across your data centers in the cloud. We are bridging the gap between the security teams, the business needs to grow and to do so in the paste of the cloud, I think the approach of being able to install within minutes, a security solution in getting complete understanding of your risk which is goes hand in hand in the way you expect and adopt cloud technology. >> That's great Gil. Thanks so much for coming on. You guys doing awesome work. Really appreciate you participating in the program. >> Thank you very much. >> And thank you for watching this AWS Startup Showcase. We're covering the next big thing in AI, Security, and Life Science on theCUBE. Keep it right there for more great content. (upbeat music)

Published Date : Jun 24 2021

SUMMARY :

of the AWS Startup Showcase. agentless is the future of cloud security. and the IT, the engineering, but how did you create And to do so, you had to have a technique into block storage and leveraging the API is that by reading the you eat your own cooking, or the gap that you have and did have that same problem And so it allows you a rapid scalability. to when you showcase the new product? the to do to the world And so we're going to ask you the way that you expect to get value, but we'll give you a in the way you expect and participating in the program. And thank you for watching

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

OrcaORGANIZATION

0.99+

AWSORGANIZATION

0.99+

1%QUANTITY

0.99+

GilPERSON

0.99+

Gil GeronPERSON

0.99+

oneQUANTITY

0.99+

more than 85%QUANTITY

0.99+

two examplesQUANTITY

0.99+

two years laterDATE

0.99+

Orca SecurityORGANIZATION

0.98+

threeQUANTITY

0.98+

two great examplesQUANTITY

0.98+

ISOORGANIZATION

0.98+

three bucketsQUANTITY

0.97+

three periodsQUANTITY

0.96+

todayDATE

0.96+

S3TITLE

0.96+

FirstQUANTITY

0.95+

firstQUANTITY

0.94+

first companyQUANTITY

0.91+

day oneQUANTITY

0.9+

SOC 2TITLE

0.87+

theCUBEORGANIZATION

0.86+

SaasORGANIZATION

0.82+

Startup ShowcaseEVENT

0.8+

s3TITLE

0.7+

doubleQUANTITY

0.57+

GilORGANIZATION

0.55+

Next Big ThingTITLE

0.51+

yearsQUANTITY

0.5+

S3COMMERCIAL_ITEM

0.47+

Paula D'Amico, Webster Bank | Io Tahoe | Enterprise Data Automation


 

>> Narrator: From around the Globe, it's theCube with digital coverage of Enterprise Data Automation, and event series brought to you by Io-Tahoe. >> Everybody, we're back. And this is Dave Vellante, and we're covering the whole notion of Automated Data in the Enterprise. And I'm really excited to have Paula D'Amico here. Senior Vice President of Enterprise Data Architecture at Webster Bank. Paula, good to see you. Thanks for coming on. >> Hi, nice to see you, too. >> Let's start with Webster bank. You guys are kind of a regional I think New York, New England, believe it's headquartered out of Connecticut. But tell us a little bit about the bank. >> Webster bank is regional Boston, Connecticut, and New York. Very focused on in Westchester and Fairfield County. They are a really highly rated regional bank for this area. They hold quite a few awards for the area for being supportive for the community, and are really moving forward technology wise, they really want to be a data driven bank, and they want to move into a more robust group. >> We got a lot to talk about. So data driven is an interesting topic and your role as Data Architecture, is really Senior Vice President Data Architecture. So you got a big responsibility as it relates to kind of transitioning to this digital data driven bank but tell us a little bit about your role in your Organization. >> Currently, today, we have a small group that is just working toward moving into a more futuristic, more data driven data warehousing. That's our first item. And then the other item is to drive new revenue by anticipating what customers do, when they go to the bank or when they log in to their account, to be able to give them the best offer. And the only way to do that is you have timely, accurate, complete data on the customer and what's really a great value on offer something to offer that, or a new product, or to help them continue to grow their savings, or do and grow their investments. >> Okay, and I really want to get into that. But before we do, and I know you're, sort of partway through your journey, you got a lot to do. But I want to ask you about Covid, how you guys handling that? You had the government coming down and small business loans and PPP, and huge volume of business and sort of data was at the heart of that. How did you manage through that? >> We were extremely successful, because we have a big, dedicated team that understands where their data is and was able to switch much faster than a larger bank, to be able to offer the PPP Long's out to our customers within lightning speed. And part of that was is we adapted to Salesforce very for we've had Salesforce in house for over 15 years. Pretty much that was the driving vehicle to get our PPP loans in, and then developing logic quickly, but it was a 24 seven development role and get the data moving on helping our customers fill out the forms. And a lot of that was manual, but it was a large community effort. >> Think about that too. The volume was probably much higher than the volume of loans to small businesses that you're used to granting and then also the initial guidelines were very opaque. You really didn't know what the rules were, but you were expected to enforce them. And then finally, you got more clarity. So you had to essentially code that logic into the system in real time. >> I wasn't directly involved, but part of my data movement team was, and we had to change the logic overnight. So it was on a Friday night it was released, we pushed our first set of loans through, and then the logic changed from coming from the government, it changed and we had to redevelop our data movement pieces again, and we design them and send them back through. So it was definitely kind of scary, but we were completely successful. We hit a very high peak. Again, I don't know the exact number but it was in the thousands of loans, from little loans to very large loans and not one customer who applied did not get what they needed for, that was the right process and filled out the right amount. >> Well, that is an amazing story and really great support for the region, your Connecticut, the Boston area. So that's fantastic. I want to get into the rest of your story now. Let's start with some of the business drivers in banking. I mean, obviously online. A lot of people have sort of joked that many of the older people, who kind of shunned online banking would love to go into the branch and see their friendly teller had no choice, during this pandemic, to go to online. So that's obviously a big trend you mentioned, the data driven data warehouse, I want to understand that, but what at the top level, what are some of the key business drivers that are catalyzing your desire for change? >> The ability to give a customer, what they need at the time when they need it. And what I mean by that is that we have customer interactions in multiple ways. And I want to be able for the customer to walk into a bank or online and see the same format, and being able to have the same feel the same love, and also to be able to offer them the next best offer for them. But they're if they want looking for a new mortgage or looking to refinance, or whatever it is that they have that data, we have the data and that they feel comfortable using it. And that's an untethered banker. Attitude is, whatever my banker is holding and whatever the person is holding in their phone, that is the same and it's comfortable. So they don't feel that they've walked into the bank and they have to do fill out different paperwork compared to filling out paperwork on just doing it on their phone. >> You actually do want the experience to be better. And it is in many cases. Now you weren't able to do this with your existing I guess mainframe based Enterprise Data Warehouses. Is that right? Maybe talk about that a little bit? >> Yeah, we were definitely able to do it with what we have today the technology we're using. But one of the issues is that it's not timely. And you need a timely process to be able to get the customers to understand what's happening. You need a timely process so we can enhance our risk management. We can apply for fraud issues and things like that. >> Yeah, so you're trying to get more real time. The traditional EDW. It's sort of a science project. There's a few experts that know how to get it. You can so line up, the demand is tremendous. And then oftentimes by the time you get the answer, it's outdated. So you're trying to address that problem. So part of it is really the cycle time the end to end cycle time that you're progressing. And then there's, if I understand it residual benefits that are pretty substantial from a revenue opportunity, other offers that you can make to the right customer, that you maybe know, through your data, is that right? >> Exactly. It's drive new customers to new opportunities. It's enhanced the risk, and it's to optimize the banking process, and then obviously, to create new business. And the only way we're going to be able to do that is if we have the ability to look at the data right when the customer walks in the door or right when they open up their app. And by doing creating more to New York times near real time data, or the data warehouse team that's giving the lines of business the ability to work on the next best offer for that customer as well. >> But Paula, we're inundated with data sources these days. Are there other data sources that maybe had access to before, but perhaps the backlog of ingesting and cleaning in cataloging and analyzing maybe the backlog was so great that you couldn't perhaps tap some of those data sources. Do you see the potential to increase the data sources and hence the quality of the data or is that sort of premature? >> Oh, no. Exactly. Right. So right now, we ingest a lot of flat files and from our mainframe type of front end system, that we've had for quite a few years. But now that we're moving to the cloud and off-prem and on-prem, moving off-prem, into like an S3 Bucket, where that data we can process that data and get that data faster by using real time tools to move that data into a place where, like snowflake could utilize that data, or we can give it out to our market. Right now we're about we do work in batch mode still. So we're doing 24 hours. >> Okay. So when I think about the data pipeline, and the people involved, maybe you could talk a little bit about the organization. You've got, I don't know, if you have data scientists or statisticians, I'm sure you do. You got data architects, data engineers, quality engineers, developers, etc. And oftentimes, practitioners like yourself, will stress about, hey, the data is in silos. The data quality is not where we want it to be. We have to manually categorize the data. These are all sort of common data pipeline problems, if you will. Sometimes we use the term data Ops, which is sort of a play on DevOps applied to the data pipeline. Can you just sort of describe your situation in that context? >> Yeah, so we have a very large data ops team. And everyone that who is working on the data part of Webster's Bank, has been there 13 to 14 years. So they get the data, they understand it, they understand the lines of business. So it's right now. We could the we have data quality issues, just like everybody else does. But we have places in them where that gets cleansed. And we're moving toward and there was very much siloed data. The data scientists are out in the lines of business right now, which is great, because I think that's where data science belongs, we should give them and that's what we're working towards now is giving them more self service, giving them the ability to access the data in a more robust way. And it's a single source of truth. So they're not pulling the data down into their own, like Tableau dashboards, and then pushing the data back out. So they're going to more not, I don't want to say, a central repository, but a more of a robust repository, that's controlled across multiple avenues, where multiple lines of business can access that data. Is that help? >> Got it, Yes. And I think that one of the key things that I'm taking away from your last comment, is the cultural aspects of this by having the data scientists in the line of business, the lines of business will feel ownership of that data as opposed to pointing fingers criticizing the data quality. They really own that that problem, as opposed to saying, well, it's Paula's problem. >> Well, I have my problem is I have data engineers, data architects, database administrators, traditional data reporting people. And because some customers that I have that are business customers lines of business, they want to just subscribe to a report, they don't want to go out and do any data science work. And we still have to provide that. So we still want to provide them some kind of regiment that they wake up in the morning, and they open up their email, and there's the report that they subscribe to, which is great, and it works out really well. And one of the things is why we purchased Io-Tahoe was, I would have the ability to give the lines of business, the ability to do search within the data. And we'll read the data flows and data redundancy and things like that, and help me clean up the data. And also, to give it to the data analysts who say, all right, they just asked me they want this certain report. And it used to take okay, four weeks we're going to go and we're going to look at the data and then we'll come back and tell you what we can do. But now with Io-Tahoe, they're able to look at the data, and then in one or two days, they'll be able to go back and say, Yes, we have the data, this is where it is. This is where we found it. This is the data flows that we found also, which is what I call it, is the break of a column. It's where the column was created, and where it went to live as a teenager. (laughs) And then it went to die, where we archive it. And, yeah, it's this cycle of life for a column. And Io-Tahoe helps us do that. And we do data lineage is done all the time. And it's just takes a very long time and that's why we're using something that has AI in it and machine running. It's accurate, it does it the same way over and over again. If an analyst leaves, you're able to utilize something like Io-Tahoe to be able to do that work for you. Is that help? >> Yeah, so got it. So a couple things there, in researching Io-Tahoe, it seems like one of the strengths of their platform is the ability to visualize data, the data structure and actually dig into it, but also see it. And that speeds things up and gives everybody additional confidence. And then the other piece is essentially infusing AI or machine intelligence into the data pipeline, is really how you're attacking automation. And you're saying it repeatable, and then that helps the data quality and you have this virtual cycle. Maybe you could sort of affirm that and add some color, perhaps. >> Exactly. So you're able to let's say that I have seven cars, lines of business that are asking me questions, and one of the questions they'll ask me is, we want to know, if this customer is okay to contact, and there's different avenues so you can go online, do not contact me, you can go to the bank and you can say, I don't want email, but I'll take texts. And I want no phone calls. All that information. So, seven different lines of business asked me that question in different ways. One said, "No okay to contact" the other one says, "Customer 123." All these. In each project before I got there used to be siloed. So one customer would be 100 hours for them to do that analytical work, and then another analyst would do another 100 hours on the other project. Well, now I can do that all at once. And I can do those types of searches and say, Yes, we already have that documentation. Here it is, and this is where you can find where the customer has said, "No, I don't want to get access from you by email or I've subscribed to get emails from you." >> Got it. Okay. Yeah Okay. And then I want to go back to the cloud a little bit. So you mentioned S3 Buckets. So you're moving to the Amazon cloud, at least, I'm sure you're going to get a hybrid situation there. You mentioned snowflake. What was sort of the decision to move to the cloud? Obviously, snowflake is cloud only. There's not an on-prem, version there. So what precipitated that? >> Alright, so from I've been in the data IT information field for the last 35 years. I started in the US Air Force, and have moved on from since then. And my experience with Bob Graham, was with snowflake with working with GE Capital. And that's where I met up with the team from Io-Tahoe as well. And so it's a proven so there's a couple of things one is Informatica, is worldwide known to move data. They have two products, they have the on-prem and the off-prem. I've used the on-prem and off-prem, they're both great. And it's very stable, and I'm comfortable with it. Other people are very comfortable with it. So we picked that as our batch data movement. We're moving toward probably HVR. It's not a total decision yet. But we're moving to HVR for real time data, which is changed capture data, moves it into the cloud. And then, so you're envisioning this right now. In which is you're in the S3, and you have all the data that you could possibly want. And that's JSON, all that everything is sitting in the S3 to be able to move it through into snowflake. And snowflake has proven to have a stability. You only need to learn and train your team with one thing. AWS as is completely stable at this point too. So all these avenues if you think about it, is going through from, this is your data lake, which is I would consider your S3. And even though it's not a traditional data lake like, you can touch it like a Progressive or Hadoop. And then into snowflake and then from snowflake into sandbox and so your lines of business and your data scientists just dive right in. That makes a big win. And then using Io-Tahoe with the data automation, and also their search engine. I have the ability to give the data scientists and data analysts the way of they don't need to talk to IT to get accurate information or completely accurate information from the structure. And we'll be right back. >> Yeah, so talking about snowflake and getting up to speed quickly. I know from talking to customers you can get from zero to snowflake very fast and then it sounds like the Io-Tahoe is sort of the automation cloud for your data pipeline within the cloud. Is that the right way to think about it? >> I think so. Right now I have Io-Tahoe attached to my on-prem. And I want to attach it to my off-prem eventually. So I'm using Io-Tahoe data automation right now, to bring in the data, and to start analyzing the data flows to make sure that I'm not missing anything, and that I'm not bringing over redundant data. The data warehouse that I'm working of, it's an on-prem. It's an Oracle Database, and it's 15 years old. So it has extra data in it. It has things that we don't need anymore, and Io-Tahoe's helping me shake out that extra data that does not need to be moved into my S3. So it's saving me money, when I'm moving from off-prem to on-prem. >> And so that was a challenge prior, because you couldn't get the lines of business to agree what to delete, or what was the issue there? >> Oh, it was more than that. Each line of business had their own structure within the warehouse. And then they were copying data between each other, and duplicating the data and using that. So there could be possibly three tables that have the same data in it, but it's used for different lines of business. We have identified using Io-Tahoe identified over seven terabytes in the last two months on data that has just been repetitive. It's the same exact data just sitting in a different schema. And that's not easy to find, if you only understand one schema, that's reporting for that line of business. >> More bad news for the storage companies out there. (both laughs) So far. >> It's cheap. That's what we were telling people. >> And it's true, but you still would rather not waste it, you'd like to apply it to drive more revenue. And so, I guess, let's close on where you see this thing going. Again, I know you're sort of partway through the journey, maybe you could sort of describe, where you see the phase is going and really what you want to get out of this thing, down the road, mid-term, longer term, what's your vision or your data driven organization. >> I want for the bankers to be able to walk around with an iPad in their hand, and be able to access data for that customer, really fast and be able to give them the best deal that they can get. I want Webster to be right there on top with being able to add new customers, and to be able to serve our existing customers who had bank accounts since they were 12 years old there and now our multi whatever. I want them to be able to have the best experience with our bankers. >> That's awesome. That's really what I want as a banking customer. I want my bank to know who I am, anticipate my needs, and create a great experience for me. And then let me go on with my life. And so that follow. Great story. Love your experience, your background and your knowledge. I can't thank you enough for coming on theCube. >> Now, thank you very much. And you guys have a great day. >> All right, take care. And thank you for watching everybody. Keep right there. We'll take a short break and be right back. (gentle music)

Published Date : Jun 23 2020

SUMMARY :

to you by Io-Tahoe. And I'm really excited to of a regional I think and they want to move it relates to kind of transitioning And the only way to do But I want to ask you about Covid, and get the data moving And then finally, you got more clarity. and filled out the right amount. and really great support for the region, and being able to have the experience to be better. to be able to get the customers that know how to get it. and it's to optimize the banking process, and analyzing maybe the backlog was and get that data faster and the people involved, And everyone that who is working is the cultural aspects of this the ability to do search within the data. and you have this virtual cycle. and one of the questions And then I want to go back in the S3 to be able to move it Is that the right way to think about it? and to start analyzing the data flows and duplicating the data and using that. More bad news for the That's what we were telling people. and really what you want and to be able to serve And so that follow. And you guys have a great day. And thank you for watching everybody.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

Paula D'AmicoPERSON

0.99+

PaulaPERSON

0.99+

ConnecticutLOCATION

0.99+

WestchesterLOCATION

0.99+

InformaticaORGANIZATION

0.99+

24 hoursQUANTITY

0.99+

oneQUANTITY

0.99+

13QUANTITY

0.99+

thousandsQUANTITY

0.99+

100 hoursQUANTITY

0.99+

Bob GrahamPERSON

0.99+

iPadCOMMERCIAL_ITEM

0.99+

Webster BankORGANIZATION

0.99+

GE CapitalORGANIZATION

0.99+

first itemQUANTITY

0.99+

AWSORGANIZATION

0.99+

two productsQUANTITY

0.99+

sevenQUANTITY

0.99+

New YorkLOCATION

0.99+

BostonLOCATION

0.99+

three tablesQUANTITY

0.99+

Each lineQUANTITY

0.99+

first setQUANTITY

0.99+

two daysQUANTITY

0.99+

DevOpsTITLE

0.99+

Webster bankORGANIZATION

0.99+

14 yearsQUANTITY

0.99+

over 15 yearsQUANTITY

0.99+

seven carsQUANTITY

0.98+

each projectQUANTITY

0.98+

Friday nightDATE

0.98+

Enterprise Data AutomationORGANIZATION

0.98+

New EnglandLOCATION

0.98+

Io-TahoeORGANIZATION

0.98+

todayDATE

0.98+

Webster's BankORGANIZATION

0.98+

one schemaQUANTITY

0.97+

Fairfield CountyLOCATION

0.97+

OneQUANTITY

0.97+

one customerQUANTITY

0.97+

over seven terabytesQUANTITY

0.97+

SalesforceORGANIZATION

0.96+

bothQUANTITY

0.95+

single sourceQUANTITY

0.93+

one thingQUANTITY

0.93+

US Air ForceORGANIZATION

0.93+

WebsterORGANIZATION

0.92+

S3COMMERCIAL_ITEM

0.92+

Enterprise Data ArchitectureORGANIZATION

0.91+

Io TahoePERSON

0.91+

OracleORGANIZATION

0.9+

15 years oldQUANTITY

0.9+

Io-TahoePERSON

0.89+

12 years oldQUANTITY

0.88+

TableauTITLE

0.87+

four weeksQUANTITY

0.86+

S3 BucketsCOMMERCIAL_ITEM

0.84+

CovidPERSON

0.81+

Data ArchitectureORGANIZATION

0.79+

JSONTITLE

0.79+

Senior Vice PresidentPERSON

0.78+

24 seven development roleQUANTITY

0.77+

last 35 yearsDATE

0.77+

both laughsQUANTITY

0.75+

Io-TahoeTITLE

0.73+

eachQUANTITY

0.72+

loansQUANTITY

0.71+

zeroQUANTITY

0.71+

Paula D'Amico, Webster Bank


 

>> Narrator: From around the Globe, it's theCube with digital coverage of Enterprise Data Automation, and event series brought to you by Io-Tahoe. >> Everybody, we're back. And this is Dave Vellante, and we're covering the whole notion of Automated Data in the Enterprise. And I'm really excited to have Paula D'Amico here. Senior Vice President of Enterprise Data Architecture at Webster Bank. Paula, good to see you. Thanks for coming on. >> Hi, nice to see you, too. >> Let's start with Webster bank. You guys are kind of a regional I think New York, New England, believe it's headquartered out of Connecticut. But tell us a little bit about the bank. >> Webster bank is regional Boston, Connecticut, and New York. Very focused on in Westchester and Fairfield County. They are a really highly rated regional bank for this area. They hold quite a few awards for the area for being supportive for the community, and are really moving forward technology wise, they really want to be a data driven bank, and they want to move into a more robust group. >> We got a lot to talk about. So data driven is an interesting topic and your role as Data Architecture, is really Senior Vice President Data Architecture. So you got a big responsibility as it relates to kind of transitioning to this digital data driven bank but tell us a little bit about your role in your Organization. >> Currently, today, we have a small group that is just working toward moving into a more futuristic, more data driven data warehousing. That's our first item. And then the other item is to drive new revenue by anticipating what customers do, when they go to the bank or when they log in to their account, to be able to give them the best offer. And the only way to do that is you have timely, accurate, complete data on the customer and what's really a great value on offer something to offer that, or a new product, or to help them continue to grow their savings, or do and grow their investments. >> Okay, and I really want to get into that. But before we do, and I know you're, sort of partway through your journey, you got a lot to do. But I want to ask you about Covid, how you guys handling that? You had the government coming down and small business loans and PPP, and huge volume of business and sort of data was at the heart of that. How did you manage through that? >> We were extremely successful, because we have a big, dedicated team that understands where their data is and was able to switch much faster than a larger bank, to be able to offer the PPP Long's out to our customers within lightning speed. And part of that was is we adapted to Salesforce very for we've had Salesforce in house for over 15 years. Pretty much that was the driving vehicle to get our PPP loans in, and then developing logic quickly, but it was a 24 seven development role and get the data moving on helping our customers fill out the forms. And a lot of that was manual, but it was a large community effort. >> Think about that too. The volume was probably much higher than the volume of loans to small businesses that you're used to granting and then also the initial guidelines were very opaque. You really didn't know what the rules were, but you were expected to enforce them. And then finally, you got more clarity. So you had to essentially code that logic into the system in real time. >> I wasn't directly involved, but part of my data movement team was, and we had to change the logic overnight. So it was on a Friday night it was released, we pushed our first set of loans through, and then the logic changed from coming from the government, it changed and we had to redevelop our data movement pieces again, and we design them and send them back through. So it was definitely kind of scary, but we were completely successful. We hit a very high peak. Again, I don't know the exact number but it was in the thousands of loans, from little loans to very large loans and not one customer who applied did not get what they needed for, that was the right process and filled out the right amount. >> Well, that is an amazing story and really great support for the region, your Connecticut, the Boston area. So that's fantastic. I want to get into the rest of your story now. Let's start with some of the business drivers in banking. I mean, obviously online. A lot of people have sort of joked that many of the older people, who kind of shunned online banking would love to go into the branch and see their friendly teller had no choice, during this pandemic, to go to online. So that's obviously a big trend you mentioned, the data driven data warehouse, I want to understand that, but what at the top level, what are some of the key business drivers that are catalyzing your desire for change? >> The ability to give a customer, what they need at the time when they need it. And what I mean by that is that we have customer interactions in multiple ways. And I want to be able for the customer to walk into a bank or online and see the same format, and being able to have the same feel the same love, and also to be able to offer them the next best offer for them. But they're if they want looking for a new mortgage or looking to refinance, or whatever it is that they have that data, we have the data and that they feel comfortable using it. And that's an untethered banker. Attitude is, whatever my banker is holding and whatever the person is holding in their phone, that is the same and it's comfortable. So they don't feel that they've walked into the bank and they have to do fill out different paperwork compared to filling out paperwork on just doing it on their phone. >> You actually do want the experience to be better. And it is in many cases. Now you weren't able to do this with your existing I guess mainframe based Enterprise Data Warehouses. Is that right? Maybe talk about that a little bit? >> Yeah, we were definitely able to do it with what we have today the technology we're using. But one of the issues is that it's not timely. And you need a timely process to be able to get the customers to understand what's happening. You need a timely process so we can enhance our risk management. We can apply for fraud issues and things like that. >> Yeah, so you're trying to get more real time. The traditional EDW. It's sort of a science project. There's a few experts that know how to get it. You can so line up, the demand is tremendous. And then oftentimes by the time you get the answer, it's outdated. So you're trying to address that problem. So part of it is really the cycle time the end to end cycle time that you're progressing. And then there's, if I understand it residual benefits that are pretty substantial from a revenue opportunity, other offers that you can make to the right customer, that you maybe know, through your data, is that right? >> Exactly. It's drive new customers to new opportunities. It's enhanced the risk, and it's to optimize the banking process, and then obviously, to create new business. And the only way we're going to be able to do that is if we have the ability to look at the data right when the customer walks in the door or right when they open up their app. And by doing creating more to New York times near real time data, or the data warehouse team that's giving the lines of business the ability to work on the next best offer for that customer as well. >> But Paula, we're inundated with data sources these days. Are there other data sources that maybe had access to before, but perhaps the backlog of ingesting and cleaning in cataloging and analyzing maybe the backlog was so great that you couldn't perhaps tap some of those data sources. Do you see the potential to increase the data sources and hence the quality of the data or is that sort of premature? >> Oh, no. Exactly. Right. So right now, we ingest a lot of flat files and from our mainframe type of front end system, that we've had for quite a few years. But now that we're moving to the cloud and off-prem and on-prem, moving off-prem, into like an S3 Bucket, where that data we can process that data and get that data faster by using real time tools to move that data into a place where, like snowflake could utilize that data, or we can give it out to our market. Right now we're about we do work in batch mode still. So we're doing 24 hours. >> Okay. So when I think about the data pipeline, and the people involved, maybe you could talk a little bit about the organization. You've got, I don't know, if you have data scientists or statisticians, I'm sure you do. You got data architects, data engineers, quality engineers, developers, etc. And oftentimes, practitioners like yourself, will stress about, hey, the data is in silos. The data quality is not where we want it to be. We have to manually categorize the data. These are all sort of common data pipeline problems, if you will. Sometimes we use the term data Ops, which is sort of a play on DevOps applied to the data pipeline. Can you just sort of describe your situation in that context? >> Yeah, so we have a very large data ops team. And everyone that who is working on the data part of Webster's Bank, has been there 13 to 14 years. So they get the data, they understand it, they understand the lines of business. So it's right now. We could the we have data quality issues, just like everybody else does. But we have places in them where that gets cleansed. And we're moving toward and there was very much siloed data. The data scientists are out in the lines of business right now, which is great, because I think that's where data science belongs, we should give them and that's what we're working towards now is giving them more self service, giving them the ability to access the data in a more robust way. And it's a single source of truth. So they're not pulling the data down into their own, like Tableau dashboards, and then pushing the data back out. So they're going to more not, I don't want to say, a central repository, but a more of a robust repository, that's controlled across multiple avenues, where multiple lines of business can access that data. Is that help? >> Got it, Yes. And I think that one of the key things that I'm taking away from your last comment, is the cultural aspects of this by having the data scientists in the line of business, the lines of business will feel ownership of that data as opposed to pointing fingers criticizing the data quality. They really own that that problem, as opposed to saying, well, it's Paula's problem. >> Well, I have my problem is I have data engineers, data architects, database administrators, traditional data reporting people. And because some customers that I have that are business customers lines of business, they want to just subscribe to a report, they don't want to go out and do any data science work. And we still have to provide that. So we still want to provide them some kind of regiment that they wake up in the morning, and they open up their email, and there's the report that they subscribe to, which is great, and it works out really well. And one of the things is why we purchased Io-Tahoe was, I would have the ability to give the lines of business, the ability to do search within the data. And we'll read the data flows and data redundancy and things like that, and help me clean up the data. And also, to give it to the data analysts who say, all right, they just asked me they want this certain report. And it used to take okay, four weeks we're going to go and we're going to look at the data and then we'll come back and tell you what we can do. But now with Io-Tahoe, they're able to look at the data, and then in one or two days, they'll be able to go back and say, Yes, we have the data, this is where it is. This is where we found it. This is the data flows that we found also, which is what I call it, is the break of a column. It's where the column was created, and where it went to live as a teenager. (laughs) And then it went to die, where we archive it. And, yeah, it's this cycle of life for a column. And Io-Tahoe helps us do that. And we do data lineage is done all the time. And it's just takes a very long time and that's why we're using something that has AI in it and machine running. It's accurate, it does it the same way over and over again. If an analyst leaves, you're able to utilize something like Io-Tahoe to be able to do that work for you. Is that help? >> Yeah, so got it. So a couple things there, in researching Io-Tahoe, it seems like one of the strengths of their platform is the ability to visualize data, the data structure and actually dig into it, but also see it. And that speeds things up and gives everybody additional confidence. And then the other piece is essentially infusing AI or machine intelligence into the data pipeline, is really how you're attacking automation. And you're saying it repeatable, and then that helps the data quality and you have this virtual cycle. Maybe you could sort of affirm that and add some color, perhaps. >> Exactly. So you're able to let's say that I have seven cars, lines of business that are asking me questions, and one of the questions they'll ask me is, we want to know, if this customer is okay to contact, and there's different avenues so you can go online, do not contact me, you can go to the bank and you can say, I don't want email, but I'll take texts. And I want no phone calls. All that information. So, seven different lines of business asked me that question in different ways. One said, "No okay to contact" the other one says, "Customer 123." All these. In each project before I got there used to be siloed. So one customer would be 100 hours for them to do that analytical work, and then another analyst would do another 100 hours on the other project. Well, now I can do that all at once. And I can do those types of searches and say, Yes, we already have that documentation. Here it is, and this is where you can find where the customer has said, "No, I don't want to get access from you by email or I've subscribed to get emails from you." >> Got it. Okay. Yeah Okay. And then I want to go back to the cloud a little bit. So you mentioned S3 Buckets. So you're moving to the Amazon cloud, at least, I'm sure you're going to get a hybrid situation there. You mentioned snowflake. What was sort of the decision to move to the cloud? Obviously, snowflake is cloud only. There's not an on-prem, version there. So what precipitated that? >> Alright, so from I've been in the data IT information field for the last 35 years. I started in the US Air Force, and have moved on from since then. And my experience with Bob Graham, was with snowflake with working with GE Capital. And that's where I met up with the team from Io-Tahoe as well. And so it's a proven so there's a couple of things one is Informatica, is worldwide known to move data. They have two products, they have the on-prem and the off-prem. I've used the on-prem and off-prem, they're both great. And it's very stable, and I'm comfortable with it. Other people are very comfortable with it. So we picked that as our batch data movement. We're moving toward probably HVR. It's not a total decision yet. But we're moving to HVR for real time data, which is changed capture data, moves it into the cloud. And then, so you're envisioning this right now. In which is you're in the S3, and you have all the data that you could possibly want. And that's JSON, all that everything is sitting in the S3 to be able to move it through into snowflake. And snowflake has proven to have a stability. You only need to learn and train your team with one thing. AWS as is completely stable at this point too. So all these avenues if you think about it, is going through from, this is your data lake, which is I would consider your S3. And even though it's not a traditional data lake like, you can touch it like a Progressive or Hadoop. And then into snowflake and then from snowflake into sandbox and so your lines of business and your data scientists just dive right in. That makes a big win. And then using Io-Tahoe with the data automation, and also their search engine. I have the ability to give the data scientists and data analysts the way of they don't need to talk to IT to get accurate information or completely accurate information from the structure. And we'll be right back. >> Yeah, so talking about snowflake and getting up to speed quickly. I know from talking to customers you can get from zero to snowflake very fast and then it sounds like the Io-Tahoe is sort of the automation cloud for your data pipeline within the cloud. Is that the right way to think about it? >> I think so. Right now I have Io-Tahoe attached to my on-prem. And I want to attach it to my off-prem eventually. So I'm using Io-Tahoe data automation right now, to bring in the data, and to start analyzing the data flows to make sure that I'm not missing anything, and that I'm not bringing over redundant data. The data warehouse that I'm working of, it's an on-prem. It's an Oracle Database, and it's 15 years old. So it has extra data in it. It has things that we don't need anymore, and Io-Tahoe's helping me shake out that extra data that does not need to be moved into my S3. So it's saving me money, when I'm moving from off-prem to on-prem. >> And so that was a challenge prior, because you couldn't get the lines of business to agree what to delete, or what was the issue there? >> Oh, it was more than that. Each line of business had their own structure within the warehouse. And then they were copying data between each other, and duplicating the data and using that. So there could be possibly three tables that have the same data in it, but it's used for different lines of business. We have identified using Io-Tahoe identified over seven terabytes in the last two months on data that has just been repetitive. It's the same exact data just sitting in a different schema. And that's not easy to find, if you only understand one schema, that's reporting for that line of business. >> More bad news for the storage companies out there. (both laughs) So far. >> It's cheap. That's what we were telling people. >> And it's true, but you still would rather not waste it, you'd like to apply it to drive more revenue. And so, I guess, let's close on where you see this thing going. Again, I know you're sort of partway through the journey, maybe you could sort of describe, where you see the phase is going and really what you want to get out of this thing, down the road, mid-term, longer term, what's your vision or your data driven organization. >> I want for the bankers to be able to walk around with an iPad in their hand, and be able to access data for that customer, really fast and be able to give them the best deal that they can get. I want Webster to be right there on top with being able to add new customers, and to be able to serve our existing customers who had bank accounts since they were 12 years old there and now our multi whatever. I want them to be able to have the best experience with our bankers. >> That's awesome. That's really what I want as a banking customer. I want my bank to know who I am, anticipate my needs, and create a great experience for me. And then let me go on with my life. And so that follow. Great story. Love your experience, your background and your knowledge. I can't thank you enough for coming on theCube. >> Now, thank you very much. And you guys have a great day. >> All right, take care. And thank you for watching everybody. Keep right there. We'll take a short break and be right back. (gentle music)

Published Date : Jun 4 2020

SUMMARY :

to you by Io-Tahoe. And I'm really excited to of a regional I think and they want to move it relates to kind of transitioning And the only way to do But I want to ask you about Covid, and get the data moving And then finally, you got more clarity. and filled out the right amount. and really great support for the region, and being able to have the experience to be better. to be able to get the customers that know how to get it. and it's to optimize the banking process, and analyzing maybe the backlog was and get that data faster and the people involved, And everyone that who is working is the cultural aspects of this the ability to do search within the data. and you have this virtual cycle. and one of the questions And then I want to go back in the S3 to be able to move it Is that the right way to think about it? and to start analyzing the data flows and duplicating the data and using that. More bad news for the That's what we were telling people. and really what you want and to be able to serve And so that follow. And you guys have a great day. And thank you for watching everybody.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

Paula D'AmicoPERSON

0.99+

PaulaPERSON

0.99+

ConnecticutLOCATION

0.99+

WestchesterLOCATION

0.99+

InformaticaORGANIZATION

0.99+

24 hoursQUANTITY

0.99+

oneQUANTITY

0.99+

13QUANTITY

0.99+

thousandsQUANTITY

0.99+

100 hoursQUANTITY

0.99+

Bob GrahamPERSON

0.99+

iPadCOMMERCIAL_ITEM

0.99+

Webster BankORGANIZATION

0.99+

GE CapitalORGANIZATION

0.99+

first itemQUANTITY

0.99+

AWSORGANIZATION

0.99+

two productsQUANTITY

0.99+

sevenQUANTITY

0.99+

New YorkLOCATION

0.99+

BostonLOCATION

0.99+

three tablesQUANTITY

0.99+

Each lineQUANTITY

0.99+

first setQUANTITY

0.99+

two daysQUANTITY

0.99+

DevOpsTITLE

0.99+

Webster bankORGANIZATION

0.99+

14 yearsQUANTITY

0.99+

over 15 yearsQUANTITY

0.99+

seven carsQUANTITY

0.98+

each projectQUANTITY

0.98+

Friday nightDATE

0.98+

New EnglandLOCATION

0.98+

Io-TahoeORGANIZATION

0.98+

todayDATE

0.98+

Webster's BankORGANIZATION

0.98+

one schemaQUANTITY

0.97+

Fairfield CountyLOCATION

0.97+

OneQUANTITY

0.97+

one customerQUANTITY

0.97+

over seven terabytesQUANTITY

0.97+

SalesforceORGANIZATION

0.96+

bothQUANTITY

0.95+

single sourceQUANTITY

0.93+

one thingQUANTITY

0.93+

US Air ForceORGANIZATION

0.93+

WebsterORGANIZATION

0.92+

S3COMMERCIAL_ITEM

0.92+

Enterprise Data ArchitectureORGANIZATION

0.91+

OracleORGANIZATION

0.9+

15 years oldQUANTITY

0.9+

Io-TahoePERSON

0.89+

12 years oldQUANTITY

0.88+

TableauTITLE

0.87+

four weeksQUANTITY

0.86+

S3 BucketsCOMMERCIAL_ITEM

0.84+

CovidPERSON

0.81+

Data ArchitectureORGANIZATION

0.79+

JSONTITLE

0.79+

Senior Vice PresidentPERSON

0.78+

24 seven development roleQUANTITY

0.77+

last 35 yearsDATE

0.77+

both laughsQUANTITY

0.75+

Io-TahoeTITLE

0.73+

eachQUANTITY

0.72+

loansQUANTITY

0.71+

zeroQUANTITY

0.71+

Amazon cloudORGANIZATION

0.65+

last two monthsDATE

0.65+

Christiaan Brand & Guemmy Kim, Google | Google Cloud Next 2019


 

>> Live from San Francisco. It's the Cube. Covering Google Cloud Next '19. Brought to you by Google Cloud and its ecosystem partners. >> Hey welcome back, everyone, we're your live coverage with the Cube here in San Francisco for Google Cloud Next 2019. I'm John Furrier, my co-host Stu Miniman. I've got two great guests here from Google. Guemmy Kim, who's a group product manager for Google, Google Security Access and Christiaan Brand, Product Manager at Google. Talking about the security key, fallen as your security key and security in general. Thanks for joining us. >> Of course, thanks for having us. >> So, actually security's the hottest topic in Cloud and any world these days, but you guys have innovation and news, so first let's get the news out of the way. All the work, giz, mottos, all of the blogs have picked it up. >> [Christiaan Brand] Right. >> Security key, titan, tell us. >> [Christiaan Brand] Okay, sure. Uh, high votes on Christiaan. So uh, last year and next we introduced the Titan Security Key which is the strongest form of multifactor certification we offer at Google. Uh, this little kind of gizmo protects you against most of the common phishing threats online. We think that's the number one problem these days. About 81% of account breaches was as a result of phishing or bad passwords. So passwords are really becoming a problem. This old man stat uh making sure that not only do you enter your password, you also need to present this little thing at the point in time when you're logging in. But it does something more, this also makes sure that you're interacting with a legitimate website at the point in time when you're trying to log in. Easy for users to fool victim to phishing, because the site looks legitimate, you enter your username and password, bad guy gets all of it. Security key makes sure that you're interacting with a legitimate website and it will not give away it's secrets, without that assurance that you're not interacting with a phishing website. >> [Christiaan Brand] News this week though is saying that these things are really cool and we recommend users use them. Uh, especially if you're like a high-risk individual or maybe an enterprise user or acts sensitive data you know Google call admin. But what we're really doing this week is we are saying "okay this is cool" but the convenience aspect has been a bit lacking right? Uh, I have to carry this with me if I want to sign in. This week we are saying this mobile phone, now also does the exact same thing as the Security Key. Gives you that level of assurance, making sure you're not interacting with the phishing website and the way we do that is by establishing a local Bluetooth link between the device you're signing in on and the mobile phone. It works on any Android N so Android 7 and later devices this week. Uh and essentially all you need is a Google account and a device with Bluetooth capability to make that work. >> Alright, so, we come to a show like this and a lot of people we geek out as like okay what are the security places that we are going to button, the cloud, and all of these environments. We are actually going to talk about something that I think most people understand is okay I don't care what policies and software you put in place, but the actual person actually needs to be responsible and did you think about things? Explain a little bit what you do, and the security pieces that you know individuals need to be thinking about and how you help them and recommend for them that they can be more secure. >> In general, yeah, I think one of the things that we see from talking to real users and customers is that people tend to underestimate the risks that they are under. And so, we've talked to people like people in the admin space or people who are in the political space and other customers of Google cloud. And they are like, why do I even need to protect my account? And like, we actually had to go and do a lot of education to actually show them that they're actually in much higher risk than they think they are. One of the things that we've seen over time, is phishing obviously is one of the most effective ways that people's accounts get compromised and you have over 70% of organizations saying that they have been victims of phishing in the last year. Then the question is, how do we actually then reduce the phishing that's happening? Because at the end of the day, the humans that are in your organization are going to be your weakest link. And over time, I think that the phishers do recognize that and they'll employ very sophisticated techniques and to try to do that. And so what we tried to do on our end is what can we do on from an algorithmic and automatic and machine side to actually catch things that human eye can't catch and Security Key is definitely one of those things. Also employed with a bunch of other like anti-phishing, anti-spear phishing type things that we will do as well. >> This is important because one of the big cloud admin problems has been human misconfiguration. >> Yeah. >> And we've seen that a lot on Amazon S3 Buckets, and they now passed practices for that but this has become just a human problem. Talk about what you guys are doing to help solve that because if I got router, server access I can't, I don't want to be sharing passwords, that's kind of of a past practices but what other tech can I put in place? What are you guys offering to give me some confidence if I'm going to be using Google cloud. >> Yeah well, I think one of the things is that as much as you can educate your workforce to do the right things like do they recognize phishing emails? Do they recognize that uh, you know this email that is coming from somebody who claims is the CEO, isn't and some of these other techniques people are using. Uh again, like there's human fallacy, there's also things that are just impossible for humans to detect. But fortunately, especially with our Cloud Services, we have very advanced techniques that administrators can actually turn on and enforce for all of the users. And this includes everything from advanced, you know malware and phishing detection techniques to things like enforcing security keys across your organization. And so we're giving administrators that power to actually say, it's not actually up to individual users, I'm actually going to put on these much stronger controls and make it available to everybody at my organization. >> And you guys see a lot of data so you have a lot of collective intelligence across a lot of signals. I mean spear phishing is the worst, it's like phishing is hard to solve. >> [Christiaan Brand] If you think about we have a demo over here just a couple of steps to the right here uh, where we take users through kind of what phishing looks like. Uh, we say that over 99.99% of kind of those types of attack will never even make it through right? The problem is spear phishing as you said, when someone is targeting a specific individual at one company. At that point, we might have not seen those signals before uh that's really where something like a Security Key kind of comes in. >> That's totally right. >> [Christiaan Brand] At that very last line of defense and that's basically what we are targeting here that .1% of users. >> Spear phishing is the most effective because it's highly targeted, no patter recognition. >> Yeah >> So question, one of the things I like we are talking about here is we need to make it easier for users to stay secure. You see, too often, it's like we have all these policies in place and use the VPN and it's like uh forget it, I'm going to use my second phone or log in over here or let me take my files over here and work on them over here and oh my gosh I've just bypassed all of the policy we put in place because you know, how do you just fundamentally think about the product needs to be simple, and it needs to be what the user needs not just the corporate security mandate? >> Yeah, I mean that's a great question. At Google we actually try a nearly completely different way of like kind of access to organizational networks. Like, for example Google kind of deprecated the VPN. Right? So for our employees if we want to access data uh on the company network, we don't use VPNs anymore we have something called kind of BeyondCorp that's like more of a kind of overarching principle than a specific technology. Although we see a lot of companies, even at the show this year that doing kind of technology and product based on that principle of zero trust or BeyondCorp. That makes it really easy for users to interact with services wherever they are and it's all based on trust on the endpoint rather than trust on the network, right? What we've seen is data breaches and things happen you know? Malicious software crawls into a network and from that point it has access to all of the crown jewels. What we are trying to say is like nowhere in being at a privilege point in the network gives you any elevated access. The elevated access is in the context that your device has, the fact that is has a screen lock, the fact that it's maybe issued by your corporation, the fact that it's approved, I don't know, the fact that is has drive instruction turned on, uh you know it's coming from a certain you know location. Those are all kind of contextual signals that we use to make up this uh, you know, our installation of BeyondCorp. This is being offered to customers today, Security Keys again, plays a vital part in all of that. Uh, you know there's trust in the end point, but there's also trust in authentication. If the user is really who they say they are, uh and this kind of gives us that elevated level of trust. >> I think this is a modern approach, that I think is worth highlighting because the old days we had a parameter, access methods were simply, you know, access servers authenticated in and you're in. But you nailed, I think the key point which is: If you don't trust anything and you just say everything is not trustworthy, you need multi-factor authentication. Now, this is the big topic in the industry because architecturally you have to be set up for it, culturally you got to buy into it. So kind of two dimensions of complexity, plus you're going down a whole new road. So you guys must do a lot more than just two factor, three factor, you got to imbed it into the phone. It could be facial recognition, it could be your patterns. So talk about what MFA, Multi-factor Authentication, how's it evolving and how fast is MFA evolving? >> Well, I think the point that you brought up earlier, that it actually has to be usable. And when I look at usability, it has to work for both your end users as well as the idea administrators who are uh putting these on for the systems and we look at both. Uh, so that's actually why we are very excited about things like the built in security key that's on your phone that we launched because it actually is that step to saying how can you take the phone that you already have that users are already familiar using, and then put it into this technology that's like super secure and that most users weren't familiar with before. And so it's concepts like that were we try to merry. Uh, that being said, we've also developed other kind of second factors specific for enterprises in the last year. For example, we are looking at things like your employee ID, like how can an organization actually use that were an outside attacker doesn't have access to that kind of information and it helps to keep you secure. So we are constantly looking at, especially for enterprises, like how do we actually do more and more things that are tailored for usability for both support cause, for the IT organization, as well as the end users themselves. >> Maybe just to add to that, I think the technology, security keys, even in the way that it's being configured today which is built into your phone, that's going into the right direction, it's making things easier. But, I think we still think there's a lot that can be done uh to really bring this technology to the end consumer at some point. So, we kind of have our own interval roadmap, we are working towards in making it even easier. So hopefully, by the time we sit here next year, we can share some more innovations on how this has just become part of everyday life for most users, without them really realizing it. >> More aware of all brain waves, whatever. >> Full story. Yup, yup, yup. >> One of the things that really I think struck a cord with a lot of people in the Keynote was Google Cloud's policy on privacy. Talk about, you on your data, we don't uh you know, some might look and say well uh I'm familiar with some of the consumer you know, ads and search and things like that. And if I think about the discussion of security as a corporate employee is oh my gosh they're going to track everything I am doing, and monitoring everything I need to have my privacy but I still want to be secure. How do you strike that balance and product and working with customers to make sure that they're not living in some authoritarian state, where every second they're monitored? >> That's a good question. Kim if you want to take that, if not I'm happy to do. >> Go ahead. >> Alright, so that is a great question. And I think this year we've really try to emphasize that point and take it home. Google has a big advertising business as everyone knows. We are trying to make the point this year, to say that these two things are separate. If you bring your data to Google Cloud, it's your data, you put that in there. The only way that data would kind of be I guess used is with the terms of service that you signed up for. And those terms of service states: it's your data, it'll be access the way that you want it to be access. And we are going one step further with access transparency this year alright. We have known something where we say well even if a Google user or Googler or Google employee needs access to that data on your behalf, lets say you have a problem with storage buckets, right, something is corrupted. You call uh support and say hey please help me fix this. There will be a near real time log that you can look at which will tell you every single access and basically this is the technology uh we've had in production for quite some time internally at Google. If someone needs to look... >> Look at the data. >> Right, exactly right if I need to look at some you know customers data, because they followed the ticket and there's some problem. These things are stringently long, access is extremely oriented, it's not that someone can just go in and look at data anywhere and the same thing applies to Cloud. It has always applied to Cloud but this year we are exposing that to the user in these kind of transparency reports making sure that the user is absolutely aware of who's accessing their data and for which reason. >> And that's a trust issue as well, it's not just using the check and giving them the benefit... >> [Christiaan Brand] Absolutely. >> But it's basically giving them a trust equation saying look they'll be no God handle access... >> Right, right, exactly. >> You heard with Uber and these other stories that are on the web, and that's huge for you guys. I mean internally just you guys are hardcore on this and you hear this all the time. >> Yeah uh >> Separate building, Sunnyvale... >> No, not separate building. But you know uh, so I've worked in privacy as well for a number of years and I'm actually very proud like as a company I feel like we actually have pushed the floor front on how privacy principles actually should be applied to the technology uh and for examples we have been working very collaboratively with regulators around the world, cause their interest is in protecting the businesses and the citizens kind of for their various countries. And uh we definitely have a commitment to make sure that you know, whether it's organization's or individuals like their privacy actually is protected, the data is secure, and certainly the whole process of how we develop products at Google like there's definitely privacy checkpoints in place so that we're doing the right thing with that data. >> Yeah, I can say I've been following Google for a long time. You guys sometimes got a bad rep because it's easy to attack Google and you guys to a great job with privacy. You pay attention to it and you have the technology, you don't just kind of talk about it. You actually implement it and you dog food it as to or you eat and drink your own champagne. I mean that's how bore became, started became Kubernetes you know? And Spanner was internal first and then became out here. This is the trend that Google, the same trend that you guys are doing with the phones, testing it out internally to see if it works. >> Yeah, yeah. >> Absolutely right, so Security Keys will start there like we uh Krebs published an article last year, just before the event saying we had zero incidents of possible phishers with Googlers since they deploying the technology. We had this inside Google for a long time, and it was kind of born out of necessity, right. We knew there was positive phishing was a problem, even Googlers fall for this kind of thing. It's impossible to train your users not to fall for this type of scam, it just is right. We can view any location all we want, but in the end like we need technology to better protect the user, even your employees. So that's were we started deploying this technology, then we said we want to go one step further. We want to kind of implement this on the mobile phone, so we've been testing this technology internally uh for quite a few months. Uh, kind of making sure that things are shaping out. We released this new beta this week uh so it's not a J product quite yet. Uh, you know as you know there is Bluetooth, there is Chrome, there is Android, there's quite a few things involved. Android Ecosystem is kind of a little bit fragmented, right, there is many OEMs. We want to make this technology available to everyone, everyone who has an Android phone, so we are kind of working on the last little things but we think the technology is in a pretty good place after doing this "drinking of champagne." >> So it's got to be bulletproof. So now, on the current news just to get back to the current news, the phone, the Android phone that has a security key is available or is it data that is available? >> [Christiaan Brand] So it's interesting. In on the Cloud side, the way that we normally launch products there is we do an alpha, which is kind of like a closed liked selection. The moment that we move and do beta, beta is open, anyone can deploy it but it has certain like terms of service limitation and other things. Which says hey don't rely on this as your sole way of accessing an account. For example, if you happening to try and sign in on a device that doesn't have Bluetooth the technology clearly will not work. So we're saying please make sure you have a backup, please keep a physical security key for the time being. But start using this technology, we think for the most popular platforms it should be well shaken out. But beta is more of a designation that we kind of reserve for saying we're starting... >> You're setting expectations. >> But also, one thing I want to clarify that just because it's in beta it doesn't mean it less secure. The worst thing that will happen is that you can be locked out of your account because you know, the Bluetooth could fail to communicate or other things like that. So I want to assure people, even though it's beta you can use it, your account is secure. >> Google has the beta kind of uh which means you either take it out to a select group of people or set expectations on terms of service. >> Right. >> Just to kind of keep an eye on it. But just to clarify, which phones again are available for the Android? >> [Christiaan Brand] Uh, we wanted to make sure that we cover as large a population as possible, so we kind of have to look at the trade offs, you know at which point in time we make this available going forward. Uh, we wanted to make sure that we cover more than 50% of the Android devices out there today. That level that we wanted to reach, kind of coincided with the Android 7, Android Nougat, is kind of the line that we've drawn. Anything Android 7 and above, it doesn't have to be a Pixel phone, it doesn't have to a Nexus phone, it doesn't have to a Samsung phone, any phone 7 and up should work with the technology. Uh and there's a little special treat for folks that have a Pixel 3 as you alluded to earlier we have the Titan M chip that we announced last year in Pixel. There we actually make use of this cryptographic chip but on other devices you have the same technology and you have the same assurance. >> Well certainly an exciting area both on from a device standpoint, everybody loves to geek out on the new phones as Google I know is coming up I'm sure it'll be a fun time to talk about that. But overall, on Cloud security is number one, access, human, errors, fixing those, automating, a very important area. So we're going to be keeping track of what's going on, thanks for coming on. >> Thanks. >> And sharing your insight, I appreciate it. >> Of course, thanks for having us. >> Okay, live Cube coverage here in San Francisco. More after this short break. Here Day 3 of 3 days of wall-to-wall coverage. I'm John Furrier and Stu Miniman, stay with us, we'll be back after this short break. (energetic music)

Published Date : Apr 11 2019

SUMMARY :

Brought to you by Google Cloud Talking about the security key, and news, so first let's get the news out of the way. against most of the and the way we do that is and the security pieces that you know the things that we see from talking of the big cloud admin problems Talk about what you guys are doing to help enforce for all of the users. And you guys see a lot of data At that point, we might have not seen we are targeting here that .1% of users. Spear phishing is the most effective of the policy we put in place because in the network gives you any elevated access. the old days we had a parameter, and it helps to keep you secure. So hopefully, by the time we sit here next year, One of the things that really Kim if you want to take that, that you want it to be access. and the same thing applies to Cloud. and giving them the benefit... But it's basically giving them and that's huge for you guys. to make sure that you know, that you guys are doing with the phones, but in the end like we need technology So now, on the current news just that we kind of reserve for saying that you can be locked out of your account Google has the beta kind of uh for the Android? Android Nougat, is kind of the line that we've drawn. it'll be a fun time to talk about that. And sharing your insight, I'm John Furrier and Stu Miniman,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
GoogleORGANIZATION

0.99+

UberORGANIZATION

0.99+

Stu MinimanPERSON

0.99+

Guemmy KimPERSON

0.99+

3 daysQUANTITY

0.99+

last yearDATE

0.99+

San FranciscoLOCATION

0.99+

SamsungORGANIZATION

0.99+

John FurrierPERSON

0.99+

Pixel 3COMMERCIAL_ITEM

0.99+

next yearDATE

0.99+

zero incidentsQUANTITY

0.99+

Android 7TITLE

0.99+

second phoneQUANTITY

0.99+

KimPERSON

0.99+

This weekDATE

0.99+

AmazonORGANIZATION

0.99+

AndroidTITLE

0.99+

ChromeTITLE

0.99+

Android NougatTITLE

0.99+

more than 50%QUANTITY

0.99+

over 70%QUANTITY

0.99+

this yearDATE

0.99+

this weekDATE

0.99+

bothQUANTITY

0.98+

oneQUANTITY

0.98+

OneQUANTITY

0.98+

Android NTITLE

0.98+

ChristiaanORGANIZATION

0.98+

NexusCOMMERCIAL_ITEM

0.98+

two great guestsQUANTITY

0.98+

firstQUANTITY

0.98+

two factorQUANTITY

0.98+

todayDATE

0.97+

.1%QUANTITY

0.97+

PixelCOMMERCIAL_ITEM

0.97+

three factorQUANTITY

0.97+

GooglerORGANIZATION

0.96+

over 99.99%QUANTITY

0.96+

About 81%QUANTITY

0.95+

SunnyvaleLOCATION

0.94+

second factorsQUANTITY

0.94+

CloudTITLE

0.94+

Day 3QUANTITY

0.93+

two thingsQUANTITY

0.93+

Frans Coppus, Driessen HRM | Nutanix .NEXT EU 2018


 

Live from London England, it's the CUBE covering .NEXT Conference Europe 2018 brought to you by Nutanix. >> Welcome Back to the CUBE here from London. Our reporting of Nutanix NEXT 2018 in Europe. With me here is Frans Coppus. You are an ICT manager at  Driessen? I'm very curious. Driessen is a customer of Nutanix? I understand that you develop HRM software among other things? >> Tell me about Driessen.  How does this work? >> Yes, well Driessen is a family business. We are a business service provider for the public sector in the Netherlands and the Driessen Group is actually a group of companies that make employment possible. We do that through the offering  of several different services. You should think of connecting people to work, so a staffing function, but next to this , we also develop software and services to take over processes for other companies or to make processes easier. >> That sounds a bit like on one hand you are a Employment Placement company, helping people get work, but on the other hand, you also seem to do something with software and the delivery of your services as a software product. How does that work? >> Yes, that's right. We deliver services to make other companies' processes easier.  You should think of payroll and things like that, but also all other kinds of processes for which we mainly use the digital services that we develop ourselves. For example, think of  a package like AFAS profit , where AFAS profit falls short on some functionality that customers would want to make use of. We can  help those customers to provide that extra functionality to improve processes. >> Yeah, that sounds like you are software development shop. You develop the software in-house? >> Tell me more about that.  Do you do this on-premise? Do you use the cloud? What tools do your developers use? How does that work? >> Well, have a team of about 25 in-house software developers.  They are spread across a number of our different companies , and the software we develop runs  partially on prem  and partially also in the cloud. >> Yes, and I understand that you have been doing this with Nutanix for a year, year and a half to provide a foundation for your infrastructure. Can you explain how this works?  What Nutanix products and services you use? What are some of the benefits? >> Well, we started looking into modernization  of our data center at the beginning of last year. That was how it started. Then we looked further into things. We already had some interest in Nutanix. We did some more research and  ultimately we  decided to choose Nutanix and basically slowly replace our entire data center  with Nutanix. So we installed some hardware  but subsequently we also selected AHV as the hypervisor layer. We came from VMWare so we basically migrated everything. I must say that the implementation itself went very quickly. The implementation of the Nutanix environment  was really a piece of cake and then we started to migrate our VMs to the platform one-by-one.  And this year we completed this process. Currently, our entire data center is running on Nutanix.  What were the problems you were hoping to solve? Well, you should mainly think about scalability.  We liked the fact  that we could start small with Nutanix  but when needed we could scale easily. Performance was an issue in the previous environment, which we also completely resolved. I think the biggest challenge we had was to make things easier. We had created a pretty complex landscape over the years. That was actually the main reason why we ultimately chose for Nutanix. Simplification of the whole landscape. Easy to manage, especially also  since we are using a mixed solution. Partially on- prem and partially in the cloud. With Nutanix this is easy to manage. >> Yeah, exactly.  Since you are an ICT manager. I can imagine that your role also changes? I assume that at first, the main  focus was on infrastructure, as it was difficult and where attention was needed. How has your role changed over the course of time? >> Yeah, that's exactly right. That role is changing.  Initially, you are very focused on the operation to keep all the "balls in the air." All sorts of things you actually don't want to have to deal with.  And this is what we are now seeing. We are able to manage the environment with fewer people.  That means you free up more time and together with the management team,  you can use this this to look into how we can improve our services How can we improve our availability?  And all of this at equal or lower cost and with less effort. >> Yeah, and I assume, to use the word " digital transformation", is also a challenge for you? You want to move closer to your customer.  How do you do that as an IT department? How how move closer to the business internally at Driessen, but also external customers? How does that work? >>Well, the needs of the customer is often translated by the Business to the software developers. What is important for us is the time-to-market. The development life cycle is pretty rapid. We work a lot on the basis of orders and as such it often goes paired with requirements that we need to adhere to. So, time to market is very important in such cases. With Nutanix we are actually able to deploy software faster and offer new features to our software engineers  who in turn can use this. >> Yes, so you are saying that your software developers can thus get closer to the business. They require less time to lay the groundwork, as it were.  We are here at .NEXT, we have watched the  keynotes, heard a lot announcements. Nutanix started as an infrastructure. A so-called modernization of what you had. Meanwhile, there are 15 products. It has become much more gigantic. When you look at the growth of the amount of people walking around here, 3,500 people.  I am curious, how are you looking at this? You will be walking around here for a few more days. You've watched the keynotes. You see the crowds. What is your impression of the event? >> Well I must say, "very cool!" Last year I went to Nice, That was a very good conference. That was also the reason that made me think "I coming back this year for sure". During the first keynote, it was really cool to see, how much bigger the entire  event has become but also the success of Nutanix. Last year, in Nice, I spoke with  some of my peers who were still 't doubt whether they would transition to Nutanix. Well, I told him about our experiences and told them I would recommend it for sure including the use of AHV as hypervisor.   You are starting to feel how everything has matured. So much more has been added. I was impressed with what products  I have seen over the last two days along with the simplicity and maturity of the products  Really super cool to see.  What really stuck with me.   What really impressed me was Frame. Frame is really super cool. It's also something we are for sure looking at to use. In addition, Beam looks very appealing. I must honestly say, we now have our entire data center on prem. Also our DR environment is on prem, because when we made the decision, there was no Beam. If I would have to make the decision again, I would absolutely choose Beam to help solve DR. There too, the simplicity with which you can manage it is really cool to see. Well, in the future we continue  to monitor such developments and I am sure that we will work with  products such as Beam and Frame in the future. >> The made the announcement of the core product. The core products to essentials, which is a bit of the uplift. Those are the next small steps you can take. And then you get enterprise. Thats where you are especially finding the new product offerings such  SaaS products , the Xi Cloud , and what I am curious about is the following. I also know from Nutanix from the perspective of infrastructure? I have seen them grow. And looking at all the announcements they have made. All those products they have developed  What was for you the lightbulb moment?  The moment where you thought "when I get home after the weekend,  I am going to use this?" I want to learn more about this!"  What is that one product from which you say  I want to get started with that!" >> I think , if I had to choose it, then I would say, "I will definitely get started with Frame"   to look at how we can provide our colleagues with a workplace when they work remote or things like that. >> Yes, >> Is also one of the issues that you are trying to solve using Nutanix? Traditionally, Nutanix did lots of VDI. Still does a lot of VDI. Is that something that the Driessen Group is moving towards? >> Yeah, well at least for a part of our colleaguesI, I see ways to implement Frame as a substitute for a VDI environment. >> Yes. Yes. Absolutely. Exactly. Yes. Exactly right.  >> Also, I was really...  and I did not realize that  they were working on this, but Nutanix  is building its own Cloud I am very curious what this will bring.  Especially if this will seamlessly integrate with your on prem environment. At the moment, I find that to be the strength of Nutanix?  The fact that you can  you can easily switch between on your own prem Nutanix environment or a cloud environment. Well, if there is also another Nutanix in the Cloud option, that would be cool. Exactly. >> All right, last question. You employ developers  Today, we also saw some announcements  during the keynote around cloud-native as it is called so nicely So Karbon, databases in the Cloud with Era with Buckets, S3, S3 storage. Are these things from which you think,  "my developers will make use of this?" Yes. Yes. My developers are all knocking on the door.  They want to  get started with containers and other stuff. So that's very good to hear that Nutanix is also diligently working on that and how it will integrate within Nutanix.  So my software developers will be very happy with that. >> Yeah, great!  Well congratulations! That really sounds like a top store!. A very nice story about Driessen. how you are using Nutanix. Well, I wish you success with your next steps that you will undoubtedly take. That was it for now.  Thanks for watching the Cube together with Frans here in London Til next time.

Published Date : Nov 30 2018

SUMMARY :

brought to you by Nutanix. I understand that you How does this work? and the Driessen Group is actually software and the delivery of your services that we develop ourselves. Yeah, that sounds like you Do you do this on-premise? , and the software we What are some of the benefits? I must say that the implementation itself went very quickly. I assume that at first, the main on the operation to keep all the "balls in the air." Yeah, and I assume, to use the word " Well, the needs of the customer is often translated by the Business I am curious, how are you looking at this? I have seen over the last two days along with the simplicity and maturity of the products Those are the next small steps you to look at how we can provide our colleagues with a workplace Is that something that the Driessen Group is moving towards? Yeah, well at least for a part of our At the moment, I find that to be the strength of So Karbon, databases in the Cloud Well, I wish you success with your next

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
NutanixORGANIZATION

0.99+

Frans CoppusPERSON

0.99+

Frans CoppusPERSON

0.99+

DriessenORGANIZATION

0.99+

Driessen GroupORGANIZATION

0.99+

Last yearDATE

0.99+

15 productsQUANTITY

0.99+

NetherlandsLOCATION

0.99+

EuropeLOCATION

0.99+

LondonLOCATION

0.99+

FransPERSON

0.99+

3,500 peopleQUANTITY

0.99+

NiceLOCATION

0.99+

London EnglandLOCATION

0.99+

a yearQUANTITY

0.99+

this yearDATE

0.98+

oneQUANTITY

0.98+

DriessenPERSON

0.98+

one productQUANTITY

0.97+

last yearDATE

0.97+

first keynoteQUANTITY

0.96+

VMWareTITLE

0.96+

TodayDATE

0.96+

about 25 in-house software developersQUANTITY

0.95+

year and a halfQUANTITY

0.93+

NutanixTITLE

0.91+

CUBEORGANIZATION

0.9+

2018DATE

0.86+

firstQUANTITY

0.84+

AHVORGANIZATION

0.84+

.NEXT Conference Europe 2018EVENT

0.79+

KarbonORGANIZATION

0.74+

BeamTITLE

0.71+

FrameORGANIZATION

0.7+

Xi CloudTITLE

0.7+

last two daysDATE

0.68+

BeamORGANIZATION

0.64+

AFASTITLE

0.62+

AHVTITLE

0.61+

S3TITLE

0.61+

CubeTITLE

0.55+

EULOCATION

0.51+

NEXT 2018EVENT

0.48+

Era with BucketsTITLE

0.41+

premORGANIZATION

0.4+

Bala Kuchibhotla and Greg Muscarella | Nutanix .NEXT EU 2018


 

>> Live from London, England, it's theCUBE covering .Next Conference Europe 2018. Brought to you by Nutanix. >> Welcome back to theCUBE's coverage of Nutanix .Next 2018 here in London, England. We're gonna be talking about developers in this segment. I'm Stu Miniman and my cohost is Joep Piscaer. Happy to welcome to the program two first time guests, Bala Kuchibhotla is the General Manager of Nutanix Era, and sitting next to him is Greg Muscarella who recently joined Nutanix, is Vice President of Products at Nutanix. Both of you been up on stage, Greg was talking about Carbon and cloud native, and of course Era is the databases of service. Gentlemen, thanks so much for joining us. >> Thank you, thank you. >> Good to be here. >> Alright, so look, developers. You know, we were thinking back, you know, I love the old meme, developers, developers, developers! Balmer had it right, and style might not have been there. Microsoft, company that does quite well with developers. You know, my background is in the enterprise space. I'm an infrastructure guy that goes to cloud, and the struggle I've had a little bit is, you know, developers really work from the application down. It's like that's where they live, and as an infrastructure guy, it's a little uncomfortable for me. So maybe to set that stage, because you know I look at Nutanix, you know, at it's core, infrastructure's a big piece of it, but its distributed architectures, it's built from the architecture from like really the hyper-scale type of environments. So help connect the dots as to where Nutanix plays with the developers, and then we'll get into your products and everything else after. Bala, you want to start? >> Cool, okay. So as you know, Nutanix is definitely addressing the IT ops market. We cannot simply its storage, compute, networking, and build the infrastructure as service. Obviously if you look at the private cloud, the IT operators are becoming the cloud operators and then giving them to the developers. We are basically trying to build a cloud for IT operators so they can present the cloud to developer. Now that we have this infrastructure pretty much there for quite some time, we're not expanding the services to other things, the platform, the platform as service. Now going back to the developer community, you will have the same kind of cloud-like consumption. That these cloud operators, the IT operators are providing the cloud for you. US developers get the same kind of public cloud consumption. They lack ability, that the ability you are trying to do, easy tools, (mumbling), and S3s, that kind of stuff, EBS, you have the same kind of APS for our Nutanix that you can spin up a VM, spin up a database, spin up a storage and then do what you want to do kind of stuff. So that's the natural journey for that kind of stuff. >> Yeah, Greg? >> Yeah, I have to agree. Look, the world has changed quite a bit for developers, and it's gotten a lot better. If you look at the tooling and what you can now do on your laptop and spinning up what would be a pretty complex environment from a three tier application with a robust database, an app tier, anything else you might have on the storage side, spin it up, break it down, and with your CICD pipeline you can have it deployed to production pretty rapidly. So we look at doing is, you know, recreating that experience that the cloud has really brought to those developers and having the same type of tooling for those enterprise-grade applications that are going to be deployed, you know, on that infrastructure that is needed in private data centers. >> So looking at, you know, one of the reasons why developers love cloud services so much, it's easy for them. They can just consume it, it's very low friction. They don't even really, you know, need to go through a purchasing process, other than credit card maybe paid for themselves in the beginning. So you know, low friction is really the key word here. So I'm wondering, you know, looking at the Nutanix, the IT ops perspective, how are you kinda bring that low friction into the developer world? >> Yeah, so I'll take the question. So essentially what I am seeing is the world in the enterprise world is very fragmented. People doing silos kind of stuff. As you rightly said, developers really want to be liberated from all this bureaucracy, right? So they really need a service kind of world where they can go click on it, they get their compute kind of stuff. There's a pressure on the IT ops to give that experience, otherwise people will flee to public a lot. As simple as that, right? So to me, the way I see is the IT ops, the DB ops, the traditional DB ops inner ring, they are understanding the need that, hey well, we gotta be service-ified. We want to provide that kind of service-like interface to our teams who are consuming that kinda stuff. So this software, Nutanix as the enterprise cloud software, lets them create their own private cloud and then give those services to the developers kinda stuff. So it's a natural transition as a company for us. We got to start from the cloud operators, now we're exposing the cloud services from the cloud operators to the cloud consumers. Essentially the developers. >> Greg, up on stage you talked about cloud native, and your premise is that cloud native is a term for a methodology, not necessarily that it's born in the cloud. Maybe help explain that a little bit, and you know, we think Nutanix is mostly in data centers today, so, you know, why isn't this just saying, "No, no, no, we can be cloud native, too." >> Fair point, and I think we're not alone in that as well, in being an enterprise infrastructure company that was looking at enabling cloud native applications, our cloud native architecture within the private data center Say look, really it's a form of doing distributed computing, right, and that's the core to it, right? So you have a stateless, ephemeral infrastructure. You're not upgrading things, you know, you're blowing it away and rebuilding it. There's some core things like that, that will move across whether it be in the cloud or on prem. And of course you need tooling for that, right, 'cause that's not the methodology most enterprise developers or operators are really going through, right, so everything's pets, not much cattle. We're really trying to change that quite a bit, and that's both enabling technology but it's also the practices that people will deploy. And we're seeing is, it's not so much us trying to sell this it's more like hey, we're used to this in the cloud, why can't we do this on prem in our private data center where we have all of our data, and the other services that we need to interact with, like, that's where the demand's really coming from. So it's that mass of data they want to interact with with the type of architecture that they've gotten used to for rapid development and deployment. >> So one other thing, you mentioned pets versus cattle. One of the things I've been seeing from, you know, an IT ops perspective is you need a good ecosystem of management products around your pets or your cattle to be able to make it cattle, right? If you don't have the tooling, you're gonna do manual interaction, and it's going to become pets. So I'm wondering, you know, in that cloud native space, how are you helping the IT ops to actually make it a cattle experience, and you know, towards management or monitoring, or backup stuff like that? >> So, you know, a lot of that is surrounded around Kubernetes, right, as a center of mass. So it's not just us doing it, it's us pulling in a lot of the support and ecosystem that is being built by the community for that and leveraging that piece. And then we have other things we'll either add onto that as it integrates with our platform and some of the capabilities there, or things that we may do, just again, pure open source. Give you a couple examples of that, so I mentioned Epoch on stage, right, so it's sort of something that brings additional metrics to Prometheus. So in addition to CPU and memory storage consumption, you're actually getting latency and other more business metrics that you might be using to trigger things in Kubernetes, like auto-scaling. I don't necessarily always scale on CPU or memory, maybe it's a customer experience that's difficult to measure The other thing is because we have the storage layer underneath, you know, we look at doing things like, again it's early in Kubernetes, but snapshotting from within Kubernetes. Right, so if we have a CSI provider, why not from within Kubernetes let an application or a container trigger a snapshot. Underneath our storage layer will take that snap and then it becomes an object that's available from within Kubernetes. So there's a whole lot of things happening. >> I just want to add a couple of comments to that. This pets versus cattle is standardization, right, like we're talking about it. In typical, old legacy enterprises there are let's take the example of databases. Like, every application team has their own databases they are trying to pass, they're all trying to do management around it kind of stuff. When we do a couple of servers, like we looked at around 2,400 databases for a typical company, they have 400 different configurations of the software. And so like this is one of the biggest companies that we talking about kind of stuff. With that kind of stuff they cannot manage cloud, obviously. This is not no more a cattle kind of stuff. But how do you bring that kind of standardization, right? That is where the Era as a product is actually coming into this. We are trying to standardize, but when you try to standardize these database environments for on premise enterprise cloud, you have to do it at their terms. What I meant to try to say is when you try to go for public cloud, you have this catalog 11204 pull the node to PSE5, you can only create databases with whatever the software the public cloud guys are doing it. But on premise needs are slightly different. So that is where Nutanix, Era, and this products will come into. We allow to people to create the cloud, and then we allow them to create their own catalog of software that they can standardize. So that is what I call standardization at their customer terms, that's what we're trying. >> And let me add to that, though. It also brings in this convenience, 'cause not only is it coming up with standardize, but we've made it even more convenient, right, because now a developer can go provision their own database, they're gonna get a standard configuration for what that is, and so you made it easier for developers and you're getting something that is more cattle-like. >> Bala, I think you're in a good seat to be able to actually give us a little bit of independent commentary, you know. The movement of databases is one of the hottest topics in the industry. I haven't seen whether Andy Jassy was sparing back with Larry Ellison, you know, at re:Invent this week, but you know, we've been watching the growth of things like Postgres, and lot of these changes, you know, Era sits clearly in that space. So what do you seeing from customers, you know, the modernization of applications is, you know, what I call the long pole in the tent. It's the toughest thing for me to be able to do. I said we usually want to first, you know, you modernize your platform, Nutanix helps with that, public cloud helps with that, and then I can modernize my application. You know, database tends to be, it's the stickiest application that we have in the industry. So what are you seeing? >> Yeah, so there are two class of applications that we see. This space is completely green field We are starting off completely. People love cloud-like experience and cloud native databases that's where the public cloud can kind of try to help them. But if you see 70 to 80% of the money still is with all the traditional apps. You're trying to now cloudify them. The cloud native stack that we talk about, the cloud native database, is not going to the game. Like you really need to think about how do you kind of take these big, giant databases that are there with Oracles, and DBTools, that kind of stuff but give the cloud-like experience, right? So the actually very difficult game for any public cloud, that's why you don't see rack provisioning and a dot list is still not there, or even if JCP natively. Oracle does that but little bit difficult. Data gravity forces people to come to on premise, that's my humble take on this, right. But how do you build, how do you make this gray area I call it a brown field, and convert them into more of a consumer-centered kind of stuff? That's where Era actually tries to play. It has two roles that, if you have existing databases, we turn to kind of convert them into more of a cloud-like databases for you, or if you have a green field then we can get you directly onto the cloud native experience. Or if you're trying to migrate from technology to other technology, definitely we would like to help. These are the three things that we try to do through Era kinda of stuff, yeah. >> So looking forward, you know, we're starting out with databases, you know, making that simple, making that small so that there's less friction in that. So maybe a question for Greg, so what's the future for Nutanix in, you know, enabling other services, other cloud-like services on a Nutanix platform going forward? >> In addition to databases. >> Exactly. >> Yeah, so we're a big proponent of standard APIs, as I talked about, right, so we have that in storage for a long time, that makes things easy with databases. We have a standard client talking to standard database backends. As we see other core building blocks, those are the kind of things that we're gonna want to build and deliver as well. So S3 is a defacto standard for object storage, for instance, so people are following that. You'll get Pub/Sub with Kafka APIs, Druid. There's a whole bunch of things, especially from the Apache project, that have become sort of defacto standards, so really it's like, okay, well which building blocks are needed by developers to build these applications that they want, and how do we really work the the community to establish those as open standards. 'Cause we really want, you know, I talked about the portability quite a bit. So we don't want anyone locked into our stack or anyone else's stack, it's like hey, let's build with the best toolkits, let's use standard, open APIs, and then developers get what they need which is portability, or run the application where they want to run it. So that's our strategy of going forward. >> Into some-I-tab we have easy to equal end, which is AHV, we have EBS equal end, we have our called Acropolis Block Services. We have S3 equal end, which is called Buckets, we have database RDS equal end, we have Era, and now we are going with content as which we call Carbon. So we are trying to kind of look at those critical services for anyone, especially for developers, to say that man, it's all ecosystem, it's not like one piece, single piece It's not this compute, it's not this storage, but it is an ecosystem of services that we need to kind of predict. >> Want to just come back to what we were talking beginning, the relationship with developers. How much of what Nutanix does is really kind of the IT ops that then enables developers, and how much direct developer engagement is it? Like, you know, is there development activity here at the conference going on that we should know about? I know that Nutanix goes to a lot of the developer shows. But maybe if you could give us some commentary on that. >> Yeah, I can start that, it's a path, right? So currently we certainly have the bulk of our interactions are gonna be on the IT operations side, and so it's only through them, because their customers are the developers that we really interact primarily today. But you should see that changing quite a bit, and I think that you'll that with the tools that we're providing directly to developers to interact with you know, through the APIs like they have Era. So for instance, if IT has deployed Era internally, then if I want a database I can go straight to those APIs or command line to grab those things. And you'll see that continuously be a trend as we let developers interact directly with our products. >> Just to give you an example, right, within the company, within Nutanix, we are drinking our own champaign, right. So we are operating a private cloud and we are exposing our APIs to all our developers. Today, if someone wants a database in Nutanix, they go to a control plane and say I want a database. Right, that's the API. How the infrastructure is getting, it's a means to an end for them, right. That's where we are going with our customers, too, hey, here is how you build your private cloud, here is how you expose all your service end points for different services, and your developers just need to enjoy them. And then there's a building aspect of it, that's the nuance that private clouds need to deal with. How do they charge the developers, how do they charge meter, that kind of stuff that people will talk about today. >> You know, I definitely heard when I talked to all the product teams, especially everything in Zai cloud, you know, extensibility with APIs is built into everything you're doing. So we're going to have to leave it there. Greg, we're gonna be catching up with you and the Nutanix team in two weeks at the Cube-Con show in Seattle. So thanks so much for joining us. Bala, pleasure, thanks for giving us all the update. And thank you, we're gonna be back with more coverage here. From Nutanix .Next 2018 in London, I'm Stu Miniman and Joep Piscaer is my cohost. Going to be do a Dutch session in a second, so be sure to stay with that. First foreign language interview on theCUBE, and thank you for watching. (electronic music)

Published Date : Nov 29 2018

SUMMARY :

Brought to you by Nutanix. Both of you been up on stage, Greg was talking and the struggle I've had a little bit is, you know, They lack ability, that the ability you are trying to do, that are going to be deployed, you know, So I'm wondering, you know, looking at the Nutanix, There's a pressure on the IT ops to give that experience, Maybe help explain that a little bit, and you know, right, and that's the core to it, right? One of the things I've been seeing from, you know, So, you know, a lot of that is surrounded around pull the node to PSE5, you can only create and so you made it easier for developers the modernization of applications is, you know, a green field then we can get you So looking forward, you know, we're starting out 'Cause we really want, you know, I talked and now we are going with content as which we call Carbon. Like, you know, is there development activity are the developers that we really interact primarily today. that's the nuance that private clouds need to deal with. Greg, we're gonna be catching up with you

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Bala KuchibhotlaPERSON

0.99+

GregPERSON

0.99+

Greg MuscarellaPERSON

0.99+

NutanixORGANIZATION

0.99+

Joep PiscaerPERSON

0.99+

Andy JassyPERSON

0.99+

Stu MinimanPERSON

0.99+

70QUANTITY

0.99+

LondonLOCATION

0.99+

Larry EllisonPERSON

0.99+

SeattleLOCATION

0.99+

MicrosoftORGANIZATION

0.99+

TodayDATE

0.99+

London, EnglandLOCATION

0.99+

two rolesQUANTITY

0.99+

single pieceQUANTITY

0.99+

one pieceQUANTITY

0.99+

BothQUANTITY

0.99+

todayDATE

0.99+

OracleORGANIZATION

0.99+

three thingsQUANTITY

0.98+

400 different configurationsQUANTITY

0.98+

OraclesORGANIZATION

0.98+

BalaPERSON

0.98+

80%QUANTITY

0.98+

PostgresORGANIZATION

0.98+

around 2,400 databasesQUANTITY

0.98+

Acropolis Block ServicesORGANIZATION

0.98+

BalmerPERSON

0.97+

Cube-ConEVENT

0.97+

oneQUANTITY

0.97+

bothQUANTITY

0.97+

KubernetesTITLE

0.97+

EBSORGANIZATION

0.96+

USLOCATION

0.96+

S3TITLE

0.95+

EraORGANIZATION

0.95+

DutchOTHER

0.95+

two weeksQUANTITY

0.95+

OneQUANTITY

0.95+

DruidTITLE

0.94+

firstQUANTITY

0.94+

EraTITLE

0.94+

CarbonORGANIZATION

0.91+

Nutanix EraORGANIZATION

0.9+

11204OTHER

0.9+

2018DATE

0.89+

EpochORGANIZATION

0.85+

two classQUANTITY

0.85+

PrometheusTITLE

0.83+

three tier applicationQUANTITY

0.82+

PSE5TITLE

0.81+

First foreignQUANTITY

0.81+

theCUBEORGANIZATION

0.81+

ApacheORGANIZATION

0.81+

two first time guestsQUANTITY

0.8+

this weekDATE

0.8+

Frans Coppus, Driessen HCM | Nutanix .NEXT EU 2018


 

Live from London England, it's the CUBE covering .NEXT Conference Europe 2018 brought to you by Nutanix. >> Welcome Back to the CUBE here from London. Our reporting Nutanix NEXT 2018 in Europe. Next to me is Frans Coppus. You are a manager at ICT Driessen? I'm very curious. Driessen, customer Nutanix? I understand you among other software make HRM? >> Tell me about Driessen. How does that work? How does that work? >> Yeah, uh well Driessen is a family business. We are a business service provider for the public sector in the Netherlands The Driessen Group is actually a group of companies that make work possible. We do that through the offering  of several different services. You should think of connecting people to work, so a staffing function, but next to this , we also develop software and services to take over processes for other companies or to make processes easier. >> That sounds a bit like you're on the edge. On the one hand you are a Employment Placement company, helping people get work, but on the other hand, you seem to do something with software and delivering your services as a software product. How does that work? >> Yeah, and do We indeed. That's right. We deliver services to other processes make companies easier. Think of payroll and things like that, but also all kinds of other processes and that's what  we mainly use the digital services and we develop these ourselves. For example, you should think of  a package like AFAS profit , where AFAS profit falls short in some functionality , but which customers  would like to make use of. We can we who help these customers  to provide that extra functionality to improve processes. >> Yeah, that sounds like you are software development house. you develops yes the software. >> That's right. >> How about that? Do your on-premises? If you do in the cloud? Where working with your developers? How does that work? How does that work? >> Well, we do it with a club of about 25 software developers we in private service to have. Spread across a number of different companies we have, and the software we Developing running Indeed, partly on prem and partly also in the cloud. >> Yes I understand that you do for a year or half do with Nutanix such as underlay for your infrastructure. Can you explain how how together is how the services which Nutanix products you use? What advantages do you have it? >> Well, we indeed the beginning of last year we look at our data center to actually modernize. That was the rise. When we have oriented ourselves. We already had some interest in Nutanix. Are there going deeper into deepen and finally we indeed decided to limit to Nutanix choose that. To actually the entire data center, we had slowly going to replace by Nutanix. Um, so we are there put down a piece of hardware, but then also chosen as the AHV hypervisor layer. We came from VMWare. We have it all petted or migrated to the implementation itself completely very quickly should say importing t soup boiler and was really a piece of cake and Then we started to one for our VMs to migrate to the platform. Uh, and that we have this year we found rounded. Currently running our entire data center running on actually uh uh on Nutanix indeed. Yes, because what were the problems you hoping to solve? of And, then you should think about a particular piece Scalability is not it? So for example we fine with Nutanix in any case, could reasonably small start, but if necessary, uh easy to be scales. Performance was an issue on the old surroundings. We actually have completely resolved. I think the biggest uh what we the biggest challenge we had was to make it easier. We had Yes quite a complex landscape been built up over the years. Uh and um, well that was actually the main why we express sible for Nutanix have chosen. Yes, simplification of the whole landscape. Easy to manage, especially since we thus actually have a mixed environment. Deel where I click ofthe cloud? Uh, well that's fine with Nutanix to manage, so eh. >> Yeah, exactly. I imagine when he hey you are IT manager. I can imagine your role uh too changed huh? First it was take I to really focus on infrastructure, What was difficult was that many friction. Um, what's your role in the course of changing time? >> Yeah, no, that's exactly right. That role is changing. uh in Initially at very busy to focus after the operation. To put it all in keep air. Uh sorts of things you actually yes it sounds I think you would not actually working with it wants to keep. Uhm, uhm, and we now see. We see Now just that with fewer people and a much more simple way that environment can manage. That means you some more time for free, and the time, even trying especially uh to stop uh along with the business see how we can provide our services improve? How can we availability improve? And say to equal or less cost and with less effort. >> Yeah, because I assume that you have to code word to use some digital transformation that I take for you are also an issue. Yes. You can also just wants to more to move the client. How do you do that if like, hey if IT department? How how you slide closer against the business and Driessen itself but also to the customer? How does that work with you? >> Uh well, uh, let's say, the customer needs to of course translated into the business Go to frequent the software developers. So what really us is very important is the time-to-market. Development course is very fast. We work a lot on the basis of Procurement and tendering often various demands we put than we meet to come. Yes. So, time to market is very important that, uh, that's why we uh um with Nutanix able to actually faster to deploy new features to provide direction our software developers then with them to get started. >> Yeah, yeah, because you say your software developers can thus closer So sit closer to that business. That requiring less time to UH to lay the groundwork, as it were. Um, I'm looking for, they not here .NEXT, uh we have the keynotes seen a lot announcements. Nutanix started as if modernization of infrastructure of What you had here. Meanwhile, are 15 products. It has become much more gigantic. If you people around here are looking grown. 3500 people, so therefore I am a bit like it? How do you doing that? Do you walk here too a few days around. You've seen the keynotes. You see the crowds. What is your impression of the event? >> Well I must say, very cool eh, I'm I last year in Nice, eh it was a very good conference. That was the reason I was thinking of now, I'm going this year definitely return. It was really cool to see the first keynote, how much greater it has now become, the whole event, but also the success of Nutanix. I uh, I spoke last year in Nice yet some of my peers still 't doubt was whether they would over Steps to Nutanix. Well I told him what our experiences were with it. And uh, and said, I it can definitely recommend. Also say the Using the AHV as Hypervisor. In the meantime brand just, it's so much matured. Uh uh, there's so much more added. I was really what really impressed me over the last two days have seen all new products and adulthood and the simplicity of such products. Yes. Really super cool to see, uh, what I was really stuck, I really of was impressed, was particularly Frame. Frame is uh uh uh uh really super cool. That is also something we definitely presently to look for to use it. In addition, Beam is something that very appealing. I must say, we have now uh uh uh uh all say data center on prem. So Also my DR environment we have on prem, because when we made the decision, there was no Beam. Yes, if I would again to choose, I would absolutely sure choose the DR uh using to solve beam. There too, the simplicity with which you can manage. Uh that's really cool to see. Well, we will in the future ensure that species continue to follow developments and uh I know sure that in the future to work uh continue with products such as a beam and a frame for example. >> Yeah, because what you see uh huh, they the announcement made by the core product. Heh, the core of the core products to essentials, which is a bit of the uplift heh? Those are the following small steps you can convert, yes, and then you get enterprise. Yes. There are now especially the really new projects uh Xi SaaS products de Xi Cloud and uh, and I am very curious to now is look I also know from Nutanix heh from that perspective? Infrastructure, and I have seen them grow. And watching all the announcements they done. All those products they ge done. What would really be for you the, you know, What was with you the light that went so you say yes I'll go you know when I uh home After the weekend, here I'm going to stroke. Here I would like to know more. What is the one product that you now say, I really want to get to work? >> I think if I had to choose it, then I would say, then I'm going to frame me definitely started to look at how we can put that to say uh uh uh uh our employees easier by a work to provide when they for instance remote work or things like that. >> Yes, is also one of the uh the issues which you who wants you solve by Nutanix Heh? Traditionally, did Nutanix many VDI. Still does much VDI. Is that something that uh, where you go when Driessen? >> Yeah, well at least for a part of our I'm sure a staff uh uh ways to deal deploy Frame say as a substitute for a VDI environment yes. >> Yes. Yes. Absolutely. Exactly. Yes. Exactly right. Uhm. >> And also I was really huh, and I did not think they were doing, but I understood so which uh Nutanix now we actually their own cloud is building. Yes. Yes that I am very curious what that is going to bring. Surely as say, seamlessly integrates with your back on prem omgeving. I actually find that to be the strength of this time of Nutanix heh? The that you you can switch easily between on your own prem Nutanix environment or a cloud environment. Yes. Well, if there is still a uh a Nutanix variation in the Cloud comes in, yes it is uh totally cool. Exactly. Yes. Exactly. Yes. >> Last >> though demand. You have of course developers in dienst. We have today also in the keynote various announcements seen around cloud-native as nice hot. Heh? So Karbon, databases in the Cloud with Era with Buckets, S3, S3 storage. Uh, these are also things that you think of, hey, that my developers will also get to work? Yes. Yes. mac we stand on all to knock on the door. Who want to containers to work and that kind Affairs , Uh uh uh so that's very good to hear that Also there say Nutanix fully is doing, and how it integrates within uh Nutanix, so uh, yes, there will my Software developers will be very happy with it. Yes. >> Yeah, great! but congratulations. That sounds like really a top story. A very nice story about Driessen. how you Using Nutanix. Well, I wish you success with the following to step. Thank you. Which undoubtedly UH will come. uhm. And that was it for UH for now. Thanks for look at the Cube Together with Frans herein in London uh, and until next time.

Published Date : Nov 29 2018

SUMMARY :

brought to you by Nutanix. I understand you How does that work? or to make processes easier. you seem to do something with to provide that extra functionality to improve processes. Yeah, that sounds like you to have. What advantages do you have it? Easy to manage, especially since we I to really focus on infrastructure, to stop uh along with the business against the business and Driessen itself but also to the customer? So, time to market is very important Yeah, yeah, because you say your software sure that in the future to work What is the one product that you now say, if I had to choose it, then I would Is that something that uh, where you go when Driessen? I'm sure a staff uh I actually find that to be the strength of this to knock on the door. to step.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
LondonLOCATION

0.99+

NutanixORGANIZATION

0.99+

Frans CoppusPERSON

0.99+

Frans CoppusPERSON

0.99+

NiceLOCATION

0.99+

15 productsQUANTITY

0.99+

EuropeLOCATION

0.99+

FransPERSON

0.99+

NetherlandsLOCATION

0.99+

DriessenORGANIZATION

0.99+

Driessen GroupORGANIZATION

0.99+

DriessenPERSON

0.99+

last yearDATE

0.99+

London EnglandLOCATION

0.99+

first keynoteQUANTITY

0.99+

todayDATE

0.98+

FirstQUANTITY

0.98+

this yearDATE

0.98+

a yearQUANTITY

0.97+

about 25 software developersQUANTITY

0.97+

oneQUANTITY

0.96+

one productQUANTITY

0.95+

3500 peopleQUANTITY

0.95+

AHVORGANIZATION

0.92+

ICT DriessenORGANIZATION

0.92+

2018DATE

0.91+

halfQUANTITY

0.89+

.NEXT Conference Europe 2018EVENT

0.83+

CUBEORGANIZATION

0.82+

KarbonORGANIZATION

0.81+

NutanixTITLE

0.8+

last two daysDATE

0.74+

XiORGANIZATION

0.73+

CloudTITLE

0.66+

prem omgevingLOCATION

0.61+

S3COMMERCIAL_ITEM

0.6+

VMWareTITLE

0.58+

SaaSTITLE

0.56+

AFASTITLE

0.55+

UHORGANIZATION

0.54+

HypervisorORGANIZATION

0.5+

BeamORGANIZATION

0.48+

NEXT 2018EVENT

0.46+

Era withTITLE

0.45+

EULOCATION

0.41+

BucketsORGANIZATION

0.34+

macORGANIZATION

0.31+

Nutanix .Next | NOLA | Day 1 | AM Keynote


 

>> PA Announcer: Off the plastic tab, and we'll turn on the colors. Welcome to New Orleans. ♪ This is it ♪ ♪ The part when I say I don't want ya ♪ ♪ I'm stronger than I've been before ♪ ♪ This is the part when I set your free ♪ (New Orleans jazz music) ("When the Saints Go Marching In") (rock music) >> PA Announcer: Ladies and gentleman, would you please welcome state of Louisiana chief design officer Matthew Vince and Choice Hotels director of infrastructure services Stacy Nigh. (rock music) >> Well good morning New Orleans, and welcome to my home state. My name is Matt Vince. I'm the chief design office for state of Louisiana. And it's my pleasure to welcome you all to .Next 2018. State of Louisiana is currently re-architecting our cloud infrastructure and Nutanix is the first domino to fall in our strategy to deliver better services to our citizens. >> And I'd like to second that warm welcome. I'm Stacy Nigh director of infrastructure services for Choice Hotels International. Now you may think you know Choice, but we don't own hotels. We're a technology company. And Nutanix is helping us innovate the way we operate to support our franchisees. This is my first visit to New Orleans and my first .Next. >> Well Stacy, you're in for a treat. New Orleans is known for its fabulous food and its marvelous music, but most importantly the free spirit. >> Well I can't wait, and speaking of free, it's my pleasure to introduce the Nutanix Freedom video, enjoy. ♪ I lose everything, so I can sing ♪ ♪ Hallelujah I'm free ♪ ♪ Ah, ah, ♪ ♪ Ah, ah, ♪ ♪ I lose everything, so I can sing ♪ ♪ Hallelujah I'm free ♪ ♪ I lose everything, so I can sing ♪ ♪ Hallelujah I'm free ♪ ♪ I'm free, I'm free, I'm free, I'm free ♪ ♪ Gritting your teeth, you hold onto me ♪ ♪ It's never enough, I'm never complete ♪ ♪ Tell me to prove, expect me to lose ♪ ♪ I push it away, I'm trying to move ♪ ♪ I'm desperate to run, I'm desperate to leave ♪ ♪ If I lose it all, at least I'll be free ♪ ♪ Ah, ah ♪ ♪ Ah, ah ♪ ♪ Hallelujah, I'm free ♪ >> PA Announcer: Ladies and gentlemen, please welcome chief marketing officer Ben Gibson ♪ Ah, ah ♪ ♪ Ah, ah ♪ ♪ Hallelujah, I'm free ♪ >> Welcome, good morning. >> Audience: Good morning. >> And welcome to .Next 2018. There's no better way to open up a .Next conference than by hearing from two of our great customers. And Matthew, thank you for welcoming us to this beautiful, your beautiful state and city. And Stacy, this is your first .Next, and I know she's not alone because guess what It's my first .Next too. And I come properly attired. In the front row, you can see my Nutanix socks, and I think my Nutanix blue suit. And I know I'm not alone. I think over 5,000 people in attendance here today are also first timers at .Next. And if you are here for the first time, it's in the morning, let's get moving. I want you to stand up, so we can officially welcome you into the fold. Everyone stand up, first time. All right, welcome. (audience clapping) So you are all joining not just a conference here. This is truly a community. This is a community of the best and brightest in our industry I will humbly say that are coming together to share best ideas, to learn what's happening next, and in particular it's about forwarding not only your projects and your priorities but your careers. There's so much change happening in this industry. It's an opportunity to learn what's coming down the road and learn how you can best position yourself for this whole new world that's happening around cloud computing and modernizing data center environments. And this is not just a community, this is a movement. And it's a movement that started quite awhile ago, but the first .Next conference was in the quiet little town of Miami, and there was about 800 of you in attendance or so. So who in this hall here were at that first .Next conference in Miami? Let me hear from you. (audience members cheering) Yep, well to all of you grizzled veterans of the .Next experience, welcome back. You have started a movement that has grown and this year across many different .Next conferences all over the world, over 20,000 of your community members have come together. And we like to do it in distributed architecture fashion just like here in Nutanix. And so we've spread this movement all over the world with .Next conferences. And this is surging. We're also seeing just today the current count 61,000 certifications and climbing. Our Next community, close to 70,000 active members of our online community because .Next is about this big moment, and it's about every other day and every other week of the year, how we come together and explore. And my favorite stat of all. Here today in this hall amongst the record 5,500 registrations to .Next 2018 representing 71 countries in whole. So it's a global movement. Everyone, welcome. And you know when I got in Sunday night, I was looking at the tweets and the excitement was starting to build and started to see people like Adile coming from Casablanca. Adile wherever you are, welcome buddy. That's a long trip. Thank you so much for coming and being here with us today. I saw other folks coming from Geneva, from Denmark, from Japan, all over the world coming together for this moment. And we are accomplishing phenomenal things together. Because of your trust in us, and because of some early risk candidly that we have all taken together, we've created a movement in the market around modernizing data center environments, radically simplifying how we operate in the services we deliver to our businesses everyday. And this is a movement that we don't just know about this, but the industry is really taking notice. I love this chart. This is Gartner's inaugural hyperconvergence infrastructure magic quadrant chart. And I think if you see where Nutanix is positioned on there, I think you can agree that's a rout, that's a homerun, that's a mic drop so to speak. What do you guys think? (audience clapping) But here's the thing. It says Nutanix up there. We can honestly say this is a win for this hall here. Because, again, without your trust in us and what we've accomplished together and your partnership with us, we're not there. But we are there, and it is thanks to everyone in this hall. Together we have created, expanded, and truly made this market. Congratulations. And you know what, I think we're just getting started. The same innovation, the same catalyst that we drove into the market to converge storage network compute, the next horizon is around multi-cloud. The next horizon is around whether by accident or on purpose the strong move with different workloads moving into public cloud, some into private cloud moving back and forth, the promise of application mobility, the right workload on the right cloud platform with the right economics. Economics is key here. If any of you have a teenager out there, and they have a hold of your credit card, and they're doing something online or the like. You get some surprises at the end of the month. And that surprise comes in the form of spiraling public cloud costs. And this isn't to say we're not going to see a lot of workloads born and running in public cloud, but the opportunity is for us to take a path that regains control over infrastructure, regain control over workloads and where they're run. And the way I look at it for everyone in this hall, it's a journey we're on. It starts with modernizing those data center environments, continues with embracing the full cloud stack and the compelling opportunity to deliver that consumer experience to rapidly offer up enterprise compute services to your internal clients, lines of businesses and then out into the market. It's then about how you standardize across an enterprise cloud environment, that you're not just the infrastructure but the management, the automation, the control, and running any tier one application. I hear this everyday, and I've heard this a lot already this week about customers who are all in with this approach and running those tier one applications on Nutanix. And then it's the promise of not only hyperconverging infrastructure but hyperconverging multiple clouds. And if we do that, this journey the way we see it what we are doing is building your enterprise cloud. And your enterprise cloud is about the private cloud. It's about expanding and managing and taking back control of how you determine what workload to run where, and to make sure there's strong governance and control. And you're radically simplifying what could be an awfully complicated scenario if you don't reclaim and put your arms around that opportunity. Now how do we do this different than anyone else? And this is going to be a big theme that you're going to see from my good friend Sunil and his good friends on the product team. What are we doing together? We're taking all of that legacy complexity, that friction, that inability to be able to move fast because you're chained to old legacy environments. I'm talking to folks that have applications that are 40 years old, and they are concerned to touch them because they're not sure if they can react if their infrastructure can meet the demands of a new, modernized workload. We're making all that complexity invisible. And if all of that is invisible, it allows you to focus on what's next. And that indeed is the spirit of this conference. So if the what is enterprise cloud, and the how we do it different is by making infrastructure invisible, data centers, clouds, then why are we all here today? What is the binding principle that spiritually, that emotionally brings us all together? And we think it's a very simple, powerful word, and that word is freedom. And when we think about freedom, we think about as we work together the freedom to build the data center that you've always wanted to build. It's about freedom to run the applications where you choose based on the information and the context that wasn't available before. It's about the freedom of choice to choose the right cloud platform for the right application, and again to avoid a lot of these spiraling costs in unanticipated surprises whether it be around security, whether it be around economics or governance that come to the forefront. It's about the freedom to invent. It's why we got into this industry in the first place. We want to create. We want to build things not keep the lights on, not be chained to mundane tasks day by day. And it's about the freedom to play. And I hear this time and time again. My favorite tweet from a Nutanix customer to this day is just updated a lot of nodes at 38,000 feed on United Wifi, on my way to spend vacation with my family. Freedom to play. This to me is emotionally what brings us all together and what you saw with the Freedom video earlier, and what you see here is this new story because we want to go out and spread the word and not only talk about the enterprise cloud, not only talk about how we do it better, but talk about why it's so compelling to be a part of this hall here today. Now just one note of housekeeping for everyone out there in case I don't want anyone to take a wrong turn as they come to this beautiful convention center here today. A lot of freedom going on in this convention center. As luck may have it, there's another conference going on a little bit down that way based on another high growth, disruptive industry. Now MJBizCon Next, and by coincidence it's also called next. And I have to admire the creativity. I have to admire that we do share a, hey, high growth business model here. And in case you're not quite sure what this conference is about. I'm the head of marketing here. I have to show the tagline of this. And I read the tagline from license to launch and beyond, the future of the, now if I can replace that blank with our industry, I don't know, to me it sounds like a new, cool Sunil product launch. Maybe launching a new subscription service or the like. Stay tuned, you never know. I think they're going to have a good time over there. I know we're going to have a wonderful week here both to learn as well as have a lot of fun particularly in our customer appreciation event tonight. I want to spend a very few important moments on .Heart. .Heart is Nutanix's initiative to promote diversity in the technology arena. In particular, we have a focus on advancing the careers of women and young girls that we want to encourage to move into STEM and high tech careers. You have the opportunity to engage this week with this important initiative. Please role the video, and let's learn more about how you can do so. >> Video Plays (electronic music) >> So all of you have received these .Heart tokens. You have the freedom to go and choose which of the four deserving charities can receive donations to really advance our cause. So I thank you for your engagement there. And this community is behind .Heart. And it's a very important one. So thank you for that. .Next is not the community, the moment it is without our wonderful partners. These are our amazing sponsors. Yes, it's about sponsorship. It's also about how we integrate together, how we innovate together, and we're about an open community. And so I want to thank all of these names up here for your wonderful sponsorship of this event. I encourage everyone here in this room to spend time, get acquainted, get reacquainted, learn how we can make wonderful music happen together, wonderful music here in New Orleans happen together. .Next isn't .Next with a few cool surprises. Surprise number one, we have a contest. This is a still shot from the Freedom video you saw right before I came on. We have strategically placed a lucky seven Nutanix Easter eggs in this video. And if you go to Nutanix.com/freedom, watch the video. You may have to use the little scrubbing feature to slow down 'cause some of these happen quickly. You're going to find some fun, clever Easter eggs. List all seven, tweet that out, or as many as you can, tweet that out with hashtag nextconf, C, O, N, F, and we'll have a random drawing for an all expenses paid free trip to .Next 2019. And just to make sure everyone understands Easter egg concept. There's an eighth one here that's actually someone that's quite famous in our circles. If you see on this still shot, there's someone in the back there with a red jacket on. That's not just anyone. We're targeting in here. That is our very own Julie O'Brien, our senior vice president of corporate marketing. And you're going to hear from Julie later on here at .Next. But Julie and her team are the engine and the creativity behind not only our new Freedom campaign but more importantly everything that you experience here this week. Julie and her team are amazing, and we can't wait for you to experience what they've pulled together for you. Another surprise, if you go and visit our Freedom booths and share your stories. So they're like video booths, you share your success stories, your partnerships, your journey that I talked about, you will be entered to win a beautiful Nutanix brand compliant, look at those beautiful colors, bicycle. And it's not just any bicycle. It's a beautiful bicycle made by our beautiful customer Trek. I actually have a Trek bike. I love cycling. Unfortunately, I'm not eligible, but all of you are. So please share your stories in the Freedom Nutanix's booths and put yourself in the running, or in the cycling to get this prize. One more thing I wanted to share here. Yesterday we had a great time. We had our inaugural Nutanix hackathon. This hackathon brought together folks that were in devops practices, many of you that are in this room. We sold out. We thought maybe we'd get four or five teams. We had to shutdown at 14 teams that were paired together with a Nutanix mentor, and you coded. You used our REST APIs. You built new apps that integrated in with Prism and Clam. And it was wonderful to see this. Everyone I talked to had a great time on this. We had three winners. In third place, we had team Copper or team bronze, but team Copper. Silver, Not That Special, they're very humble kind of like one of our key mission statements. And the grand prize winner was We Did It All for the Cookies. And you saw them coming in on our Mardi Gras float here. We Did It All for Cookies, they did this very creative job. They leveraged an Apple Watch. They were lighting up VMs at a moments notice utilizing a lot of their coding skills. Congratulations to all three, first, second, and third all receive $2,500. And then each of them, then were able to choose a charity to deliver another $2,500 including Ronald McDonald House for the winner, we did it all for the McDonald Land cookies, I suppose, to move forward. So look for us to do more of these kinds of events because we want to bring together infrastructure and application development, and this is a great, I think, start for us in this community to be able to do so. With that, who's ready to hear form Dheeraj? You ready to hear from Dheeraj? (audience clapping) I'm ready to hear from Dheeraj, and not just 'cause I work for him. It is my distinct pleasure to welcome on the stage our CEO, cofounder and chairman Dheeraj Pandey. ("Free" by Broods) ♪ Hallelujah, I'm free ♪ >> Thank you Ben and good morning everyone. >> Audience: Good morning. >> Thank you so much for being here. It's just such an elation when I'm thinking about the Mardi Gras crowd that came here, the partners, the customers, the NTCs. I mean there's some great NTCs up there I could relate to because they're on Slack as well. How many of you are in Slack Nutanix internal Slack channel? Probably 5%, would love to actually see this community grow from here 'cause this is not the only even we would love to meet you. We would love to actually do this in a real time bite size communication on our own internal Slack channel itself. Now today, we're going to talk about a lot of things, but a lot of hard things, a lot of things that take time to build and have evolved as the industry itself has evolved. And one of the hard things that I want to talk about is multi-cloud. Multi-cloud is a really hard problem 'cause it's full of paradoxes. It's really about doing things that you believe are opposites of each other. It's about frictionless, but it's also about governance. It's about being simple, and it's also about being secure at the same time. It's about delight, it's about reducing waste, it's about owning, and renting, and finally it's also about core and edge. How do you really make this big at a core data center whether it's public or private? Or how do you really shrink it down to one or two nodes at the edge because that's where your machines are, that's where your people are? So this is a really hard problem. And as you hear from Sunil and the gang there, you'll realize how we've actually evolved our solutions to really cater to some of these. One of the approaches that we have used to really solve some of these hard problems is to have machines do more, and I said a lot of things in those four words, have machines do more. Because if you double-click on that sentence, it really means we're letting design be at the core of this. And how do you really design data centers, how do you really design products for the data center that hush all the escalations, the details, the complexities, use machine-learning and AI and you know figure our anomaly detection and correlations and patter matching? There's a ton of things that you need to do to really have machines do more. But along the way, the important lesson is to make machines invisible because when machines become invisible, it actually makes something else visible. It makes you visible. It makes governance visible. It makes applications visible, and it makes services visible. A lot of things, it makes teams visible, careers visible. So while we're really talking about invisibility of machines, we're talking about visibility of people. And that's how we really brought all of you together in this conference as well because it makes all of us shine including our products, and your careers, and your teams as well. And I try to define the word customer success. You know it's one of the favorite words that I'm actually using. We've just hired a great leader in customer success recently who's really going to focus on this relatively hard problem, yet another hard problem of customer success. We think that customer success, true customer success is possible when we have machines tend towards invisibility. But along the way when we do that, make humans tend towards freedom. So that's the real connection, the yin-yang of machines and humans that Nutanix is really all about. And that's why design is at the core of this company. And when I say design, I mean reducing friction. And it's really about reducing friction. And everything we do, the most mundane of things which could be about migrating applications, spinning up VMs, self-service portals, automatic upgrades, and automatic scale out, and all the things we do is about reducing friction which really makes machines become invisible and humans gain freedom. Now one of the other convictions we have is how all of us are really tied at the hip. You know our success is tied to your success. If we make you successful, and when I say you, I really mean Main Street. Main Street being customers, and partners, and employees. If we make all of you successful, then we automatically become successful. And very coincidentally, Main Street and Wall Street are also tied in that very same relation as well. If we do a great job at Main Street, I think the Wall Street customer, i.e. the investor, will take care of itself. You'll have you know taken care of their success if we took care of Main Street success itself. And that's the narrative that our CFO Dustin Williams actually went and painted to our Wall Street investors two months ago at our investor day conference. We talked about a $3 billion number. We said look as a company, as a software company, we can go and achieve $3 billion in billings three years from now. And it was a telling moment for the company. It was really about talking about where we could be three years from now. But it was not based on a hunch. It was based on what we thought was customer success. Now realize that $3 billion in pure software. There's only 10 to 15 companies in the world that actually have that kind of software billings number itself. But at the core of this confidence was customer success, was the fact that we were doing a really good job of not over promising and under delivering but under promising starting with small systems and growing the trust of the customers over time. And this is one of the statistics we actually talk about is repeat business. The first dollar that a Global 2000 customer spends in Nutanix, and if we go and increase their trust 15 times by year six, and we hope to actually get 17 1/2 and 19 times more trust in the years seven and eight. It's very similar numbers for non Global 2000 as well. Again, we go and really hustle for customer success, start small, have you not worry about paying millions of dollars upfront. You know start with systems that pay as they grow, you pay as they grow, and that's the way we gain trust. We have the same non Global 2000 pay $6 1/2 for the first dollar they've actually spent on us. And with this, I think the most telling moment was when Dustin concluded. And this is key to this audience here as well. Is how the current cohorts which is this audience here and many of them were not here will actually carry the weight of $3 billion, more than 50% of it if we did a great job of customer success. If we were humble and honest and we really figured out what it meant to take care of you, and if we really understood what starting small was and having to gain the trust with you over time, we think that more than 50% of that billings will actually come from this audience here without even looking at new logos outside. So that's the trust of customer success for us, and it takes care of pretty much every customer not just the Main Street customer. It takes care of Wall Street customer. It takes care of employees. It takes care of partners as well. Now before I talk about technology and products, I want to take a step back 'cause many of you are new in this audience. And I think that it behooves us to really talk about the history of this company. Like we've done a lot of things that started out as science projects. In fact, I see some tweets out there and people actually laugh at Nutanix cloud. And this is where we were in 2012. So if you take a step back and think about where the company was almost seven, eight years ago, we were up against giants. There was a $30 billion industry around network attached storage, and storage area networks and blade servers, and hypervisors, and systems management software and so on. So what did we start out with? Very simple premise that we will collapse the architecture of the data center because three tier is wasteful and three tier is not delightful. It was a very simple hunch, we said we'll take rack mount servers, we'll put a layer of software on top of it, and that layer of software back then only did storage. It didn't do networks and security, and it ran on top of a well known hypervisor from VMware. And we said there's one non negotiable thing. The fact that the design must change. The control plane for this data center cannot be the old control plane. It has to be rethought through, and that's why Prism came about. Now we went and hustled hard to add more things to it. We said we need to make this diverse because it can't just be for one application. We need to make it CPU heavy, and memory heavy, and storage heavy, and flash heavy and so on. And we built a highly configurable HCI. Now all of them are actually configurable as you know of today. And this was not just innovation in technologies, it was innovation in business and sizing, capacity planning, quote to cash business processes. A lot of stuff that we had to do to make this highly configurable, so you can really scale capacity and performance independent of each other. Then in 2014, we did something that was very counterintuitive, but we've done this on, and on, and on again. People said why are you disrupting yourself? You know you've been doing a good job of shipping appliances, but we also had the conviction that HCI was not about hardware. It was about a form factor, but it was really about an operating system. And we started to compete with ourselves when we said you know what we'll do arm's length distribution, we'll do arm's length delivery of products when we give our software to our Dell partner, to Dell as a partner, a loyal partner. But at the same time, it was actually seen with a lot of skepticism. You know these guys are wondering how to really make themselves vanish because they're competing with themselves. But we also knew that if we didn't compete with ourselves someone else will. Now one of the most controversial decisions was really going and doing yet another hypervisor. In the year 2015, it was really preposterous to build yet another hypervisor. It was a very mature market. This was coming probably 15 years too late to the market, or at least 10 years too late to market. And most people said it shouldn't be done because hypervisor is a commodity. And that's the word we latched on to. That this commodity should not have to be paid for. It shouldn't have a team of people managing it. It should actually be part of your overall stack, but it should be invisible. Just like storage needs to be invisible, virtualization needs to be invisible. But it was a bold step, and I think you know at least when we look at our current numbers, 1/3rd of our customers are actually using AHV. At least every quarter that we look at it, our new deployments, at least 35% of it is actually being used on AHV itself. And again, a very preposterous thing to have said five years ago, four years ago to where we've actually come. Thank you so much for all of you who've believed in the fact that virtualization software must be invisible and therefore we should actually try out something that is called AHV today. Now we went and added Lenovo to our OEM mix, started to become even more of a software company in the year 2016. Went and added HP and Cisco in some of very large deals that we talk about in earnings call, our HP deals and Cisco deals. And some very large customers who have procured ELAs from us, enterprise license agreements from us where they want to mix and match hardware. They want to mix Dell hardware with HP hardware but have common standard Nutanix entitlements. And finally, I think this was another one of those moments where we say why should HCI be only limited to X86. You know this operating systems deserves to run on a non X86 architecture as well. And that gave birth to this idea of HCI and Power Systems from IBM. And we've done a great job of really innovating with them in the last three, four quarters. Some amazing innovation that has come out where you can now run AIX 7.x on Nutanix. And for the first time in the history of data center, you can actually have a single software not just a data plane but a control plane where you can manage an IBM farm, an Power farm, and open Power farm and an X86 farm from the same control plane and have you know the IBM farm feed storage to an Intel compute farm and vice versa. So really good things that we've actually done. Now along the way, something else was going on while we were really busy building the private cloud, we knew there was a new consumption model on computing itself. People were renting computing using credit cards. This is the era of the millennials. They were like really want to bypass people because at the end of the day, you know why can't computing be consumed the way like eCommerce is? And that devops movement made us realize that we need to add to our stack. That stack will now have other computing clouds that is AWS and Azure and GCP now. So similar to the way we did Prism. You know Prism was really about going and making hypervisors invisible. You know we went ahead and said we'll add Calm to our portfolio because Calm is now going to be what Prism was to us back when we were really dealing with multi hypervisor world. Now it's going to be multi-cloud world. You know it's one of those things we had a gut around, and we really come to expect a lot of feedback and real innovation. I mean yesterday when we had the hackathon. The center, the epicenter of the discussion was Calm, was how do you automate on multiple clouds without having to write a single line of code? So we've come a long way since the acquisition of Calm two years ago. I think it's going to be a strong pillar in our overall product portfolio itself. Now the word multi-cloud is going to be used and over used. In fact, it's going to be blurring its lines with the idea of hyperconvergence of clouds, you know what does it mean. We just hope that hyperconvergence, the way it's called today will morph to become hyperconverged clouds not just hyperconverged boxes which is a software defined infrastructure definition itself. But let's focus on the why of multi-cloud. Why do we think it can't all go into a public cloud itself? The one big reason is just laws of the land. There's data sovereignty and computing sovereignty, regulations and compliance because of which you need to be in where the government with the regulations where the compliance rules want you to be. And by the way, that's just one reason why the cloud will have to disperse itself. It can't just be 10, 20 large data centers around the world itself because you have 200 plus countries and half of computing actually gets done outside the US itself. So it's a really important, very relevant point about the why of multi-cloud. The second one is just simple laws of physics. You know if there're machines at the edge, and they're producing so much data, you can't bring all the data to the compute. You have to take the compute which is stateless, it's an app. You take the app to where the data is because the network is the enemy. The network has always been the enemy. And when we thought we've made fatter networks, you've just produced more data as well. So this just goes without saying that you take something that's stateless that's without gravity, that's lightweight which is compute and the application and push it close to where the data itself is. And the third one which is related is just latency reasons you know? And it's not just about machine latency and electrons transferring over the speed light, and you can't defy the speed of light. It's also about human latency. It's also about multiple teams saying we need to federate and delegate, and we need to push things down to where the teams are as opposed to having to expect everybody to come to a very large computing power itself. So all the ways, the way they are, there will be at least three different ways of looking at multi-cloud itself. There's a centralized core cloud. We all go and relate to this because we've seen large data centers and so on. And that's the back office workhorse. It will crunch numbers. It will do processing. It will do a ton of things that will go and produce results for you know how we run our businesses, but there's also the dispersal of the cloud, so ROBO cloud. And this is the front office server that's really serving. It's a cloud that's going to serve people. It's going to be closer to people, and that's what a ROBO cloud is. We have a ton of customers out here who actually use Nutanix and the ROBO environments themselves as one node, two node, three node, five node servers, and it just collapses the entire server closet room in these ROBOs into something really, really small and minuscule. And finally, there's going to be another dispersed edge cloud because that's where the machines are, that's where the data is. And there's going to be an IOT machine fog because we need to miniaturize computing to something even smaller, maybe something that can really land in the palm in a mini server which is a PC like server, but you need to run everything that's enterprise grade. You should be able to go and upgrade them and monitor them and analyze them. You know do enough computing up there, maybe event-based processing that can actually happen. In fact, there's some great innovation that we've done at the edge with IOTs that I'd love for all of you to actually attend some sessions around as well. So with that being said, we have a hole in the stack. And that hole is probably one of the hardest problems that we've been trying to solve for the last two years. And Sunil will talk a lot about that. This idea of hybrid. The hybrid of multi-cloud is one of the hardest problems. Why? Because we're talking about really blurring the lines with owning and renting where you have a single-tenant environment which is your data center, and a multi-tenant environment which is the service providers data center, and the two must look like the same. And the two must look like the same is that hard a problem not just for burst out capacity, not just for security, not just for identity but also for networks. Like how do you blur the lines between networks? How do you blur the lines for storage? How do you really blur the lines for a single pane of glass where you can think of availability zones that look highly symmetric even though they're not because one of 'em is owned by you, and it's single-tenant. The other one is not owned by you, that's multi-tenant itself. So there's some really hard problems in hybrid that you'll hear Sunil talk about and the team. And some great strides that we've actually made in the last 12 months of really working on Xi itself. And that completes the picture now in terms of how we believe the state of computing will be going forward. So what are the must haves of a multi-cloud operating system? We talked about marketplace which is catalogs and automation. There's a ton of orchestration that needs to be done for multi-cloud to come together because now you have a self-service portal which is providing an eCommerce view. It's really about you know getting to do a lot of requests and workflows without having people come in the way, without even having tickets. There's no need for tickets if you can really start to think like a self-service portal as if you're just transacting eCommerce with machines and portals themselves. Obviously the next one is networking security. You need to blur the lines between on-prem and off-prem itself. These two play a huge role. And there's going to be a ton of details that you'll see Sunil talk about. But finally, what I want to focus on the rest of the talk itself here is what governance and compliance. This is a hard problem, and it's a hard problem because things have evolved. So I'm going to take a step back. Last 30 years of computing, how have consumption models changed? So think about it. 30 years ago, we were making decisions for 10 plus years, you know? Mainframe, at least 10 years, probably 20 plus years worth of decisions. These were decisions that were extremely waterfall-ish. Make 10s of millions of dollars worth of investment for a device that we'd buy for at least 10 to 20 years. Now as we moved to client-server, that thing actually shrunk. Now you're talking about five years worth of decisions, and these things were smaller. So there's a little bit more velocity in our decisions. We were not making as waterfall-ish decision as we used to with mainframes. But still five years, talk about virtualized, three tier, maybe three to five year decisions. You know they're still relatively big decisions that we were making with computer and storage and SAN fabrics and virtualization software and systems management software and so on. And here comes Nutanix, and we said no, no. We need to make it smaller. It has to become smaller because you know we need to make more agile decisions. We need to add machines every week, every month as opposed to adding you know machines every three to five years. And we need to be able to upgrade them, you know any point in time. You can do the upgrades every month if you had to, every week if you had to and so on. So really about more agility. And yet, we were not complete because there's another evolution going on, off-prem in the public cloud where people are going and doing reserved instances. But more than that, they were doing on demand stuff which no the decision was days to weeks. Some of these things that unitive compute was being rented for days to weeks, not years. And if you needed something more, you'd shift a little to the left and use reserved instances. And then spot pricing, you could do spot pricing for hours and finally lambda functions. Now you could to function as a service where things could actually be running only for minutes not even hours. So as you can see, there's a wide spectrum where when you move to the right, you get more elasticity, and when you move to the left, you're talking about predictable decision making. And in fact, it goes from minutes on one side to 10s of years on the other itself. And we hope to actually go and blur the lines between where NTNX is today where you see Nutanix right now to where we really want to be with reserved instances and on demand. And that's the real ask of Nutanix. How do you take care of this discontinuity? Because when you're owning things, you actually end up here, and when you're renting things, you end up here. What does it mean to really blur the lines between these two because people do want to make decisions that are better than reserved instance in the public cloud. We'll talk about why reserved instances which looks like a proxy for Nutanix it's still very, very wasteful even though you might think it's delightful, it's very, very wasteful. So what does it mean for on-prem and off-prem? You know you talk about cost governance, there's security compliance. These high velocity decisions we're actually making you know where sometimes you could be right with cost but wrong on security, but sometimes you could be right in security but wrong on cost. We need to really figure out how machines make some of these decisions for us, how software helps us decide do we have the right balance between cost, governance, and security compliance itself? And to get it right, we have introduced our first SAS service called Beam. And to talk more about Beam, I want to introduce Vijay Rayapati who's the general manager of Beam engineering to come up on stage and talk about Beam itself. Thank you Vijay. (rock music) So you've been here a couple of months now? >> Yes. >> At the same time, you spent the last seven, eight years really handling AWS. Tell us more about it. >> Yeah so we spent a lot of time trying to understand the last five years at Minjar you know how customers are really consuming in this new world for their workloads. So essentially what we tried to do is understand the consumption models, workload patterns, and also build algorithms and apply intelligence to say how can we lower this cost and you know improve compliance of their workloads.? And now with Nutanix what we're trying to do is how can we converge this consumption, right? Because what happens here is most customers start with on demand kind of consumption thinking it's really easy, but the total cost of ownership is so high as the workload elasticity increases, people go towards spot or a scaling, but then you need a lot more automation that something like Calm can help them. But predictability of the workload increases, then you need to move towards reserved instances, right to lower costs. >> And those are some of the things that you go and advise with some of the software that you folks have actually written. >> But there's a lot of waste even in the reserved instances because what happens it while customers make these commitments for a year or three years, what we see across, like we track a billion dollars in public cloud consumption you know as a Beam, and customers use 20%, 25% of utilization of their commitments, right? So how can you really apply, take the data of consumption you know apply intelligence to essentially reduce their you know overall cost of ownership. >> You said something that's very telling. You said reserved instances even though they're supposed to save are still only 20%, 25% utilized. >> Yes, because the workloads are very dynamic. And the next thing is you can't do hot add CPU or hot add memory because you're buying them for peak capacity. There is no convergence of scaling that apart from the scaling as another node. >> So you actually sized it for peak, but then using 20%, 30%, you're still paying for the peak. >> That's right. >> Dheeraj: That can actually add up. >> That's what we're trying to say. How can we deliver visibility across clouds? You know how can we deliver optimization across clouds and consumption models and bring the control while retaining that agility and demand elasticity? >> That's great. So you want to show us something? >> Yeah absolutely. So this is Beam as just Dheeraj outlined, our first SAS service. And this is my first .Next. And you know glad to be here. So what you see here is a global consumption you know for a business across different clouds. Whether that's in a public cloud like Amazon, or Azure, or Nutanix. We kind of bring the consumption together for the month, the recent month across your accounts and services and apply intelligence to say you know what is your spent efficiency across these clouds? Essentially there's a lot of intelligence that goes in to detect your workloads and consumption model to say if you're spending $100, how efficiently are you spending? How can you increase that? >> So you have a centralized view where you're looking at multiple clouds, and you know you talk about maybe you can take an example of an account and start looking at it? >> Yes, let's go into a cloud provider like you know for this business, let's go and take a loot at what's happening inside an Amazon cloud. Here we get into the deeper details of what's happening with the consumption of a specific services as well as the utilization of both on demand and RI. You know what can you do to lower your cost and detect your spend efficiency of a dollar to see you know are there resources that are provisioned by teams for applications that are not being used, or are there resources that we should go and rightsize because you know we have all this monitoring data, configuration data that we crunch through to basically detect this? >> You think there's billions of events that you look at everyday. You're already looking at a billon dollars worth of AWS spend. >> Right, right. >> So billions of events, billing, metering events every year to really figure out and optimize for them. >> So what we have here is a very popular international government organization. >> Dheeraj: Wow, so it looks like Russians are everywhere, the cloud is everywhere actually. >> Yes, it's quite popular. So when you bring your master account into Beam, we kind of detect all the linked accounts you know under that. Then you can go and take a look at not just at the organization level within it an account level. >> So these are child objects, you know. >> That's right. >> You can think of them as ephemeral accounts that you create because you don't want to be on the record when you're doing spams on Facebook for example. >> Right, let's go and take a look at what's happening inside a Facebook ad spend account. So we have you know consumption of the services. Let's go deeper into compute consumption, and you kind of see a trendline. You can do a lot of computing. As you see, looks like one campaign has ended. They started another campaign. >> Dheeraj: It looks like they're not stopping yet, man. There's a lot of money being made in Facebook right now. (Vijay laughing) >> So not only just get visibility at you know compute as a service inside a cloud provider, you can go deeper inside compute and say you know what is a service that I'm really consuming inside compute along with the CPUs n'stuff, right? What is my data transfer? You know what is my network? What is my load blancers? So essentially you get a very deeper visibility you know as a service right. Because we have three goals for Beam. How can we deliver visibility across clouds? How can we deliver visibility across services? And how can we deliver, then optimization? >> Well I think one thing that I just want to point out is how this SAS application was an extremely teachable moment for me to learn about the different resources that people could use about the public cloud. So all of you who actually have not gone deep enough into the idea of public cloud. This could be a great app for you to learn about things, the resources, you know things that you could do to save and security and things of that nature. >> Yeah. And we really believe in creating the single pane view you know to mange your optimization of a public cloud. You know as Ben spoke about as a business, you need to have freedom to use any cloud. And that's what Beam delivers. How can you make the right decision for the right workload to use any of the cloud of your choice? >> Dheeraj: How 'about databases? You talked about compute as well but are there other things we could look at? >> Vijay: Yes, let's go and take a look at database consumption. What you see here is they're using inside Facebook ad spending, they're using all databases except Oracle. >> Dheeraj: Wow, looks like Oracle sales folks have been active in Russia as well. (Vijay laughing) >> So what we're seeing here is a global view of you know what is your spend efficiency and which is kind of a scorecard for your business for the dollars that you're spending. And the great thing is Beam kind of brings together you know through its intelligence and algorithms to detect you know how can you rightsize resources and how can you eliminate things that you're not using? And we deliver and one click fix, right? Let's go and take a look at resources that are maybe provisioned for storage and not being used. We deliver the seamless one-click philosophy that Nutanix has to eliminate it. >> So one click, you can actually just pick some of these wasteful things that might be looking delightful because using public cloud, using credit cards, you can go in and just say click fix, and it takes care of things. >> Yeah, and not only remove the resources that are unused, but it can go and rightsize resources across your compute databases, load balancers, even past services, right? And this is where the power of it kind of comes for a business whether you're using on-prem and off-prem. You know how can you really converge that consumption across both? >> Dheeraj: So do you have something for Nutanix too? >> Vijay: Yes, so we have basically been working on Nutanix with something that we're going to deliver you know later this year. As you can see here, we're bringing together the consumption for the Nutanix, you know the services that you're using, the licensing and capacity that is available. And how can you also go and optimize within Nutanix environments >> That's great. >> for the next workload. Now let me quickly show you what we have on the compliance side. This is an extremely powerful thing that we've been working on for many years. What we deliver here just like in cost governance, a global view of your compliance across cloud providers. And the most powerful thing is you can go into a cloud provider, get the next level of visibility across cloud regimes for hundreds of policies. Not just policies but those policies across different regulatory compliances like HIPA, PCI, CAS. And that's very powerful because-- >> So you're saying a lot of what you folks have done is codified these compliance checks in software to make sure that people can sleep better at night knowing that it's PCI, and HIPA, and all that compliance actually comes together? >> And you can build this not just by cloud accounts, you can build them across cloud accounts which is what we call security centers. Essentially you can go and take a deeper look at you know the things. We do a whole full body scan for your cloud infrastructure whether it's AWS Amazon or Azure, and you can go and now, again, click to fix things. You know that had been probably provisioned that are violating the security compliance rules that should be there. Again, we have the same one-click philosophy to say how can you really remove things. >> So again, similar to save, you're saying you can go and fix some of these security issues by just doing one click. >> Absolutely. So the idea is how can we give our people the freedom to get visibility and use the right cloud and take the decisions instantly through one click. That's what Beam delivers you know today. And you know get really excited, and it's available at beam.nutanix.com. >> Our first SAS service, ladies and gentleman. Thank you so much for doing this, Vijay. It looks like there's going to be a talk here at 10:30. You'll talk more about the midterm elections there probably? >> Yes, so you can go and write your own security compliances as well. You know within Beam, and a lot of powerful things you can do. >> Awesome, thank you so much, Vijay. I really appreciate it. (audience clapping) So as you see, there's a lot of work that we're doing to really make multi-cloud which is a hard problem. You know think about working the whole body of it and what about cost governance? What about security compliance? Obviously what about hybrid networks, and security, and storage, you know compute, many of the things that you've actually heard from us, but we're taking it to a level where the business users can now understand the implications. A CFO's office can understand the implications of waste and delight. So what does customer success mean to us? You know again, my favorite word in a long, long time is really go and figure out how do you make you, the customer, become operationally efficient. You know there's a lot of stuff that we deliver through software that's completely uncovered. It's so latent, you don't even know you have it, but you've paid for it. So you've got to figure out what does it mean for you to really become operationally efficient, organizationally proficient. And it's really important for training, education, stuff that you know you're people might think it's so awkward to do in Nutanix, but it could've been way simpler if you just told you a place where you can go and read about it. Of course, I can just use one click here as opposed to doing things the old way. But most importantly to make it financially accountable. So the end in all this is, again, one of the things that I think about all the time in building this company because obviously there's a lot of stuff that we want to do to create orphans, you know things above the line and top line and everything else. There's also a bottom line. Delight and waste are two sides of the same coin. You know when we're talking about developers who seek delight with public cloud at the same time you're looking at IT folks who're trying to figure out governance. They're like look you know the CFOs office, the CIOs office, they're trying to figure out how to curb waste. These two things have to go hand in hand in this era of multi-cloud where we're talking about frictionless consumption but also governance that looks invisible. So I think, at the end of the day, this company will do a lot of stuff around one-click delight but also go and figure out how do you reduce waste because there's so much waste including folks there who actually own Nutanix. There's so much software entitlement. There's so much waste in the public cloud itself that if we don't go and put our arms around, it will not lead to customer success. So to talk more about this, the idea of delight and the idea of waste, I'd like to bring on board a person who I think you know many of you actually have talked about it have delightful hair but probably wasted jokes. But I think has wasted hair and delightful jokes. So ladies and gentlemen, you make the call. You're the jury. Sunil R.M.J. Potti. ("Free" by Broods) >> So that was the first time I came out from the bottom of a screen on a stage. I actually now know what it feels to be like a gopher. Who's that laughing loudly at the back? Okay, do we have the... Let's see. Okay, great. We're about 15 minutes late, so that means we're running right on time. That's normally how we roll at this conference. And we have about three customers and four demos. Like I think there's about three plus six, about nine folks coming onstage. So we'll have our own version of the parade as well on the main stage for the next 70 minutes. So let's just jump right into it. I think we've been pretty consistent in terms of our longterm plans since we started the company. And it's become a lot more clearer over the last few years about our plans to essentially make computing invisible as Dheeraj mentioned. We're doing this across multiple acts. We started with HCI. We call it making infrastructure invisible. We extended that to making data centers invisible. And then now we're in this mode of essentially extending it to converging clouds so that you can actually converge your consumption models. And so today's conference and essentially the theme that you're going to be seeing throughout the breakout sessions is about a journey towards invisible clouds, but make sure that you internalize the fact that we're investing heavily in each of the three phases. It's just not about the hybrid cloud with Nutanix, it's about actually finishing the job about making infrastructure invisible, expanding that to kind of go after the full data center, and then of course embark on some real meaningful things around invisible clouds, okay? And to start the session, I think you know the part that I wanted to make sure that we are all on the same page because most of us in the room are still probably in this phase of the journey which is about invisible infrastructure. And there the three key products and especially two of them that most of you guys know are Acropolis and Prism. And they're sort of like the bedrock of our company. You know especially Acropolis which is about the web scale architecture. Prism is about consumer grade design. And with Acropolis now being really mature. It's in the seventh year of innovation. We still have more than half of our company in terms of R and D spend still on Acropolis and Prism. So our core product is still sort of where we think we have a significant differentiation on. We're not going to let our foot off the peddle there. You know every time somebody comes to me and says look there's a new HCI render popping out or an existing HCI render out there, I ask a simple question to our customers saying show me 100 customers with 100 node deployments, and it will be very hard to find any other render out there that does the same thing. And that's the power of Acropolis the code platform. And then it's you know the fact that the velocity associated with Acropolis continues to be on a fast pace. We came out with various new capabilities in 5.5 and 5.6, and one of the most complicated things to get right was the fact to shrink our three node cluster to a one node, two node deployment. Most of you actually had requirements on remote office, branch office, or the edge that actually allowed us to kind of give us you know sort of like the impetus to kind of go design some new capabilities into our core OS to get this out. And associated with Acropolis and expanding into Prism, as you will see, the first couple of years of Prism was all about refactoring the user interface, doing a good job with automation. But more and more of the investments around Prism is going to be based on machine learning. And you've seen some variants of that over the last 12 months, and I can tell you that in the next 12 to 24 months, most of our investments around infrastructure operations are going to be driven by AI techniques starting with most of our R and D spend also going into machine-learning algorithms. So when you talk about all the enhancements that have come on with Prism whether it be formed by you know the management console changing to become much more automated, whether now we give you automatic rightsizing, anomaly detection, or a series of functionality that have gone into it, the real core sort of capabilities that we're putting into Prism and Acropolis are probably best served by looking at the quality of the product. You probably have seen this slide before. We started showing the number of nodes shipped by Nutanix two years ago at this conference. It was about 35,000 plus nodes at that time. And since then, obviously we've you know continued to grow. And we would draw this line which was about enterprise class quality. That for the number of bugs found as a percentage of nodes shipped, there's a certain line that's drawn. World class companies do about probably 2% to 3%, number of CFDs per node shipped. And we were just broken that number two years ago. And to give you guys an idea of how that curve has shown up, it's now currently at .95%. And so along with velocity, you know this focus on being true to our roots of reliability and stability continues to be, you know it's an internal challenge, but it's also some of the things that we keep a real focus on. And so between Acropolis and Prism, that's sort of like our core focus areas to sort of give us the confidence that look we have this really high bar that we're sort of keeping ourselves accountable to which is about being the most advanced enterprise cloud OS on the planet. And we will keep it this way for the next 10 years. And to complement that, over a period of time of course, we've added a series of services. So these are services not just for VMs but also for files, blocks, containers, but all being delivered in that single one-click operations fashion. And to really talk more about it, and actually probably to show you the real deal there it's my great pleasure to call our own version of Moses inside the company, most of you guys know him as Steve Poitras. Come on up, Steve. (audience clapping) (rock music) >> Thanks Sunil. >> You barely fit in that door, man. Okay, so what are we going to talk about today, Steve? >> Absolutely. So when we think about when Nutanix first got started, it was really focused around VDI deployments, smaller workloads. However over time as we've evolved the product, added additional capabilities and features, that's grown from VDI to business critical applications as well as cloud native apps. So let's go ahead and take a look. >> Sunil: And we'll start with like Oracle? >> Yeah, that's one of the key ones. So here we can see our Prism central user interface, and we can see our Thor cluster obviously speaking to the Avengers theme here. We can see this is doing right around 400,000 IOPs at around 360 microseconds latency. Now obviously Prism central allows you to mange all of your Nutanix deployments, but this is just running on one single Nutanix cluster. So if we hop over here to our explore tab, we can see we have a few categories. We have some Kubernetes, some AFS, some Xen desktop as well as Oracle RAC. Now if we hope over to Oracle RAC, we're running a SLOB workload here. So obviously with Oracle enterprise applications performance, consistency, and extremely low latency are very critical. So with this SLOB workload, we're running right around 300 microseconds of latency. >> Sunil: So this is what, how many node Oracle RAC cluster is this? >> Steve: This is a six node Oracle RAC deployment. >> Sunil: Got it. And so what has gone into the product in recent releases to kind of make this happen? >> Yeah so obviously on the hardware front, there's been a lot of evolutions in storage mediums. So with the introduction of NVME, persistent memory technologies like 3D XPoint, that's meant storage media has become a lot faster. Now to allow you to full take advantage of that, that's where we've had to do a lot of optimizations within the storage stack. So with AHV, we have what we call AHV turbo mode which allows you to full take advantage of those faster storage mediums at that much lower latency. And then obviously on the networking front, technologies such as RDMA can be leveraged to optimize that network stack. >> Got it. So that was Oracle RAC running on a you know Nutanix cluster. It used to be a big deal a couple of years ago. Now we've got many customers doing that. On the same environment though, we're going to show you is the advent of actually putting file services in the same scale out environment. And you know many of you in the audience probably know about AFS. We released it about 12 to 14 months ago. It's been one of our most popular new products of all time within Nutanix's history. And we had SMB support was for user file shares, VDI deployments, and it took awhile to bake, to get to scale and reliability. And then in the last release, in the recent release that we just shipped, we now added NFS for support so that we can no go after the full scale file server consolidation. So let's take a look at some of that stuff. >> Yep, let's do it. So hopping back over to Prism, we can see our four cluster here. Overall cluster-wide latency right around 360 microseconds. Now we'll hop down to our file server section. So here we can see we have our Next A File Server hosting right about 16.2 million files. Now if you look at our shares and exports, we can see we have a mix of different shares. So one of the shares that you see there is home directories. This is an SMB share which is actually mapped and being leveraged by our VDI desktops for home folders, user profiles, things of that nature. We can also see this Oracle backup share here which is exposed to our rack host via NFS. So RMAN is actually leveraging this to provide native database backups. >> Got it. So Oracle VMs, backup using files, or for any other file share requirements with AFS. Do we have the cluster also showing, I know, so I saw some Kubernetes as well on it. Let's talk about what we're thinking of doing there. >> Yep, let's do it. So if we think about cloud, cloud's obviously a big buzz word, so is containers in Kubernetes. So with ACS 1.0 what we did is we introduced native support for Docker integration. >> And pause there. And we screwed up. (laughing) So just like the market took a left turn on Kubernetes, obviously we realized that, and now we're working on ACS 2.0 which is what we're going to talk about, right? >> Exactly. So with ACS 2.0, we've introduced native Kubernetes support. Now when I think about Kubernetes, there's really two core areas that come to mind. The first one is around native integration. So with that, we have our Kubernetes volume integration, we're obviously doing a lot of work on the networking front, and we'll continue to push there from an integration point of view. Now the other piece is around the actual deployment of Kubernetes. When we think about a lot of Nutanix administrators or IT admins, they may have never deployed Kubernetes before, so this could be a very daunting task. And true to the Nutanix nature, we not only want to make our platform simple and intuitive, we also want to do this for any ecosystem products. So with ACS 2.0, we've simplified the full Kubernetes deployment and switching over to our ACS two interface, we can see this create cluster button. Now this actually pops up a full wizard. This wizard will actually walk you through the full deployment process, gather the necessary inputs for you, and in a matter of a few clicks and a few minutes, we have a full Kubernetes deployment fully provisioned, the masters, the workers, all the networking fully done for you, very simple and intuitive. Now if we hop back over to Prism, we can see we have this ACS2 Kubernetes category. Clicking on that, we can see we have eight instances of virtual machines. And here are Kubernetes virtual machines which have actually been deployed as part of this ACS2 installer. Now one of the nice things is it makes the IT administrator's job very simple and easy to do. The deployment straightforward monitoring and management very straightforward and simple. Now for the developer, the application architect, or engineers, they interface and interact with Kubernetes just like they would traditionally on any platform. >> Got it. So the goal of ACS is to ensure that the developer ecosystem still uses whatever tools that they are you know preferring while at that same time allowing this consolidation of containers along with VMs all on that same, single runtime, right? So that's ACS. And then if you think about where the OS is going, there's still some open space at the end. And open space has always been look if you just look at a public cloud, you look at blocks, files, containers, the most obvious sort of storage function that's left is objects. And that's the last horizon for us in completing the storage stack. And we're going to show you for the first time a preview of an upcoming product called the Acropolis Object Storage Services Stack. So let's talk a little bit about it and then maybe show the demo. >> Yeah, so just like we provided file services with AFS, block services with ABS, with OSS or Object Storage Services, we provide native object storage, compatibility and capability within the Nutanix platform. Now this provides a very simply common S3 API. So any integrations you've done with S3 especially Kubernetes, you can actually leverage that out of the box when you've deployed this. Now if we hop back over to Prism, I'll go here to my object stores menu. And here we can see we have two existing object storage instances which are running. So you can deploy however many of these as you wanted to. Now just like the Kubernetes deployment, deploying a new object instance is very simple and easy to do. So here I'll actually name this instance Thor's Hammer. >> You do know he loses it, right? He hasn't seen the movies yet. >> Yeah, I don't want any spoilers yet. So once we specified the name, we can choose our capacity. So here we'll just specify a large instance or type. Obviously this could be any amount or storage. So if you have a 200 node Nutanix cluster with petabytes worth of data, you could do that as well. Once we've selected that, we'll select our expected performance. And this is going to be the number of concurrent gets and puts. So essentially how many operations per second we want this instance to be able to facilitate. Once we've done that, the platform will actually automatically determine how many virtual machines it needs to deploy as well as the resources and specs for those. And once we've done that, we'll go ahead and click save. Now here we can see it's actually going through doing the deployment of the virtual machines, applying any necessary configuration, and in the matter of a few clicks and a few seconds, we actually have this Thor's Hammer object storage instance which is up and running. Now if we hop over to one of our existing object storage instances, we can see this has three buckets. So one for Kafka-queue, I'm actually using this for my Kafka cluster where I have right around 62 million objects all storing ProtoBus. The second one there is Spark. So I actually have a Spark cluster running on our Kubernetes deployed instance via ACS 2.0. Now this is doing analytics on top of this data using S3 as a storage backend. Now for these objects, we support native versioning, native object encryption as well as worm compliancy. So if you want to have expiry periods, retention intervals, that sort of thing, we can do all that. >> Got it. So essentially what we've just shown you is with upcoming objects as well that the same OS can now support VMs, files, objects, containers, all on the same one click operational fabric. And so that's in some way the real power of Nutanix is to still keep that consistency, scalability in place as we're covering each and every workload inside the enterprise. So before Steve gets off stage though, I wanted to talk to you guys a little bit about something that you know how many of you been to our Nutanix headquarters in San Jose, California? A few. I know there's like, I don't know, 4,000 or 5,000 people here. If you do come to the office, you know when you land in San Jose Airport on the way to longterm parking, you'll pass our office. It's that close. And if you come to the fourth floor, you know one of the cubes that's where I sit. In the cube beside me is Steve. Steve sits in the cube beside me. And when I first joined the company, three or four years ago, and Steve's if you go to his cube, it no longer looks like this, but it used to have a lot of this stuff. It was like big containers of this. I remember the first time. Since I started joking about it, he started reducing it. And then Steve eventually got married much to our surprise. (audience laughing) Much to his wife's surprise. And then he also had a baby as a bigger surprise. And if you come over to our office, and we welcome you, and you come to the fourth floor, find my cube or you'll find Steve's Cube, it now looks like this. Okay, so thanks a lot, my man. >> Cool, thank you. >> Thanks so much. (audience clapping) >> So single OS, any workload. And like Steve who's been with us for awhile, it's my great pleasure to invite one of our favorite customers, CSC Karen who's also been with us for three to four years. And I'll share some fond memories about how she's been with the company for awhile, how as partners we've really done a lot together. So without any further ado, let me bring up Karen. Come on up, Karen. (rock music) >> Thank you for having me. >> Yeah, thank you. So I remember, so how many of you guys were with Nutanix first .Next in Miami? I know there was a question like that asked last time. Not too many. You missed it. We wished we could go back to that. We wouldn't fit 3/4s of this crowd. But Karen was our first customer in the keynote in 2015. And we had just talked about that story at that time where you're just become a customer. Do you want to give us some recap of that? >> Sure. So when we made the decision to move to hyperconverged infrastructure and chose Nutanix as our partner, we rapidly started to deploy. And what I mean by that is Sunil and some of the Nutanix executives had come out to visit with us and talk about their product on a Tuesday. And on a Wednesday after making the decision, I picked up the phone and said you know what I've got to deploy for my VDI cluster. So four nodes showed up on Thursday. And from the time it was plugged in to moving over 300 VDIs and 50 terabytes of storage and turning it over for the business for use was less than three days. So it was really excellent testament to how simple it is to start, and deploy, and utilize the Nutanix infrastructure. Now part of that was the delight that we experienced from our customers after that deployment. So we got phone calls where people were saying this report it used to take so long that I'd got out and get a cup of coffee and come back, and read an article, and do some email, and then finally it would finish. Those reports are running in milliseconds now. It's one click. It's very, very simple, and we've delighted our customers. Now across that journey, we have gone from the simple workloads like VDIs to the much more complex workloads around Splunk and Hadoop. And what's really interesting about our Splunk deployment is we're handling over a billion events being logged everyday. And the deployment is smaller than what we had with a three tiered infrastructure. So when you hear people talk about waste and getting that out and getting to an invisible environment where you're just able to run it, that's what we were able to achieve both with everything that we're running from our public facing websites to the back office operations that we're using which include Splunk and even most recently our Cloudera and Hadoop infrastructure. What it does is it's got 30 crawlers that go out on the internet and start bringing data back. So it comes back with over two terabytes of data everyday. And then that environment, ingests that data, does work against it, and responds to the business. And that again is something that's smaller than what we had on traditional infrastructure, and it's faster and more stable. >> Got it. And it covers a lot of use cases as well. You want to speak a few words on that? >> So the use cases, we're 90%, 95% deployed on Nutanix, and we're covering all of our use cases. So whether that's a customer facing app or a back office application. And what are business is doing is it's handling large portfolios of data for fortune 500 companies and law firms. And these applications are all running with improved stability, reliability, and performance on the Nutanix infrastructure. >> And the plan going forward? >> So the plan going forward, you actually asked me that in Miami, and it's go global. So when we started in Miami and that first deployment, we had four nodes. We now have 283 nodes around the world, and we started with about 50 terabytes of data. We've now got 3.8 petabytes of data. And we're deployed across four data centers and six remote offices. And people ask me often what is the value that we achieved? So simplification. It's all just easier, and it's all less expensive. Being able to scale with the business. So our Cloudera environment ended up with one day where it spiked to 1,000 times more load, 1,000 times, and it just responded. We had rally cries around improved productivity by six times. So 600% improved productivity, and we were able to actually achieve that. The numbers you just saw on the slide that was very, very fast was we calculated a 40% reduction in total cost of ownership. We've exceeded that. And when we talk about waste, that other number on the board there is when I saved the company one hour of maintenance activity or unplanned downtime in a month which we're now able to do the majority of our maintenance activities without disrupting any of our business solutions, I'm saving $750,000 each time I save that one hour. >> Wow. All right, Karen from CSE. Thank you so much. That was great. Thank you. I mean you know some of these data points frankly as I started talking to Karen as well as some other customers are pretty amazing in terms of the genuine value beyond financial value. Kind of like the emotional sort of benefits that good products deliver to some of our customers. And I think that's one of the core things that we take back into engineering is to keep ourselves honest on either velocity or quality even hiring people and so forth. Is to actually the more we touch customers lives, the more we touch our partner's lives, the more it allows us to ensure that we can put ourselves in their shoes to kind of make sure that we're doing the right thing in terms of the product. So that was the first part, invisible infrastructure. And our goal, as we've always talked about, our true North is to make sure that this single OS can be an exact replica, a truly modern, thoughtful but original design that brings the power of public cloud this AWS or GCP like architectures into your mainstream enterprises. And so when we take that to the next level which is about expanding the scope to go beyond invisible infrastructure to invisible data centers, it starts with a few things. Obviously, it starts with virtualization and a level of intelligent management, extends to automation, and then as we'll talk about, we have to embark on encompassing the network. And that's what we'll talk about with Flow. But to start this, let me again go back to one of our core products which is the bedrock of our you know opinionated design inside this company which is Prism and Acropolis. And Prism provides, I mentioned, comes with a ton of machine-learning based intelligence built into the product in 5.6 we've done a ton of work. In fact, a lot of features are coming out now because now that PC, Prism Central that you know has been decoupled from our mainstream release strain and will continue to release on its own cadence. And the same thing when you actually flip it to AHV on its own train. Now AHV, two years ago it was all about can I use AHV for VDI? Can I use AHV for ROBO? Now I'm pretty clear about where you cannot use AHV. If you need memory overcome it, stay with VMware or something. If you need, you know Metro, stay with another technology, else it's game on, right? And if you really look at the adoption of AHV in the mainstream enterprise, the customers now speak for themselves. These are all examples of large global enterprises with multimillion dollar ELAs in play that have now been switched over. Like I'll give you a simple example here, and there's lots of these that I'm sure many of you who are in the audience that are in this camp, but when you look at the breakout sessions in the pods, you'll get a sense of this. But I'll give you one simple example. If you look at the online payment company. I'm pretty sure everybody's used this at one time or the other. They had the world's largest private cloud on open stack, 21,000 nodes. And they were actually public about it three or four years ago. And in the last year and a half, they put us through a rigorous VOC testing scale, hardening, and it's a full blown AHV only stack. And they've started cutting over. Obviously they're not there yet completely, but they're now literally in hundreds of nodes of deployment of Nutanix with AHV as their primary operating system. So it is primetime from a deployment perspective. And with that as the base, no cloud is complete without actually having self-service provisioning that truly drives one-click automation, and can you do that in this consumer grade design? And Calm was acquired, as you guys know, in 2016. We had a choice of taking Calm. It was reasonably feature complete. It supported multiple clouds. It supported ESX, it supported Brownfield, It supported AHV. I mean they'd already done the integration with Nutanix even before the acquisition. And we had a choice. The choice was go down the path of dynamic ops or some other products where you took it for revenue or for acceleration, you plopped it into the ecosystem and sold it at this power sucking alien on top of our stack, right? Or we took a step back, re-engineered the product, kept some of the core essence like the workflow engine which was good, the automation, the object model and all, but refactored it to make it look like a natural extension of our operating system. And that's what we did with Calm. And we just launched it in December, and it's been one of our most popular new products now that's flying off the shelves. If you saw the number of registrants, I got a notification of this for the breakout sessions, the number one session that has been preregistered with over 500 people, the first two sessions are around Calm. And justifiably so because it just as it lives up to its promise, and it'll take its time to kind of get to all the bells and whistles, all the capabilities that have come through with AHV or Acropolis in the past. But the feature functionality, the product market fit associated with Calm is dead on from what the feedback that we can receive. And so Calm itself is on its own rapid cadence. We had AWS and AHV in the first release. Three or four months later, we now added ESX support. We added GCP support and a whole bunch of other capabilities, and I think the essence of Calm is if you can combine Calm and along with private cloud automation but also extend it to multi-cloud automation, it really sets Nutanix on its first genuine path towards multi-cloud. But then, as I said, if you really fixate on a software defined data center message, we're not complete as a full blown AWS or GCP like IA stack until we do the last horizon of networking. And you probably heard me say this before. You heard Dheeraj and others talk about it before is our problem in networking isn't the same in storage. Because the data plane in networking works. Good L2 switches from Cisco, Arista, and so forth, but the real problem networking is in the control plane. When something goes wrong at a VM level in Nutanix, you're able to identify whether it's a storage problem or a compute problem, but we don't know whether it's a VLAN that's mis-configured, or there've been some packets dropped at the top of the rack. Well that all ends now with Flow. And with Flow, essentially what we've now done is take the work that we've been working on to create built-in visibility, put some network automation so that you can actually provision VLANs when you provision VMs. And then augment it with micro segmentation policies all built in this easy to use, consume fashion. But we didn't stop there because we've been talking about Flow, at least the capabilities, over the last year. We spent significant resources building it. But we realized that we needed an additional thing to augment its value because the world of applications especially discovering application topologies is a heady problem. And if we didn't address that, we wouldn't be fulfilling on this ambition of providing one-click network segmentation. And so that's where Netsil comes in. Netsil might seem on the surface yet another next generation application performance management tool. But the innovations that came from Netsil started off at the research project at the University of Pennsylvania. And in fact, most of the team right now that's at Nutanix is from the U Penn research group. And they took a really original, fresh look at how do you sit in a network in a scale out fashion but still reverse engineer the packets, the flow through you, and then recreate this application topology. And recreate this not just on Nutanix, but do it seamlessly across multiple clouds. And to talk about the power of Flow augmented with Netsil, let's bring Rajiv back on stage, Rajiv. >> How you doing? >> Okay so we're going to start with some Netsil stuff, right? >> Yeah, let's talk about Netsil and some of the amazing capabilities this acquisition's bringing to Nutanix. First of all as you mentioned, Netsil's completely non invasive. So it installs on the network, it does all its magic from there. There're no host agents, non of the complexity and compatibility issues that entails. It's also monitoring the network at layer seven. So it's actually doing a deep packet inspection on all your application data, and can give you insights into services and APIs which is very important for modern applications and the way they behave. To do all this of course performance is key. So Netsil's built around a completely distributed architecture scaled to really large workloads. Very exciting technology. We're going to use it in many different ways at Nutanix. And to give you a flavor of that, let me show you how we're thinking of integrating Flow and Nestil together, so micro segmentation and Netsil. So to do that, we install Netsil in one of our Google accounts. And that's what's up here now. It went out there. It discovered all the VMs we're running on that account. It created a map essentially of all their interactions, and you can see it's like a Google Maps view. I can zoom into it. I can look at various things running. I can see lots of HTTP servers over here, some databases. >> Sunil: And it also has stats, right? You can go, it actually-- >> It does. We can take a look at that for a second. There are some stats you can look at right away here. Things like transactions per second and latencies and so on. But if I wanted to micro segment this application, it's not really clear how to do so. There's no real pattern over here. Taking the Google Maps analogy a little further, this kind of looks like the backstreets of Cairo or something. So let's do this step by step. Let me first filter down to one application. Right now I'm looking at about three or four different applications. And Netsil integrates with the metadata. So this is that the clouds provide. So I can search all the tags that I have. So by doing that, I can zoom in on just the financial application. And when I do this, the view gets a little bit simpler, but there's still no real pattern. It's not clear how to micro segment this, right? And this is where the power of Netsil comes in. This is a fairly naive view. This is what tool operating at layer four just looking at ports and TCP traffic would give you. But by doing deep packet inspection, Netsil can get into the services layer. So instead of grouping these interactions by hostname, let's group them by service. So you go service tier. And now you can see this is a much simpler picture. Now I have some patterns. I have a couple of load balancers, an HA proxy and an Nginx. I have a web application front end. I have some application servers running authentication services, search services, et cetera, a database, and a database replica. I could go ahead and micro segment at this point. It's quite possible to do it at this point. But this is almost too granular a view. We actually don't usually want to micro segment at individual service level. You think more in terms of application tiers, the tiers that different services belong to. So let me go ahead and group this differently. Let me group this by app tier. And when I do that, a really simple picture emerges. I have a load balancing tier talking to a web application front end tier, an API tier, and a database tier. Four tiers in my application. And this is something I can work with. This is something that I can micro segment fairly easily. So let's switch over to-- >> Before we dot that though, do you guys see how he gave himself the pseudonym called Dom Toretto? >> Focus Sunil, focus. >> Yeah, for those guys, you know that's not the Avengers theme, man, that's the Fast and Furious theme. >> Rajiv: I think a year ahead. This is next years theme. >> Got it, okay. So before we cut over from Netsil to Flow, do we want to talk a few words about the power of Flow, and what's available in 5.6? >> Sure so Flow's been around since the 5.6 release. Actually some of the functionality came in before that. So it's got invisibility into the network. It helps you debug problems with WLANs and so on. We had a lot of orchestration with other third party vendors with load balancers, with switches to make publishing much simpler. And then of course with our most recent release, we GA'ed our micro segmentation capabilities. And that of course is the most important feature we have in Flow right now. And if you look at how Flow policy is set up, it looks very similar to what we just saw with Netsil. So we have load blancer talking to a web app, API, database. It's almost identical to what we saw just a moment ago. So while this policy was created manually, it is something that we can automate. And it is something that we will do in future releases. Right now, it's of course not been integrated at that level yet. So this was created manually. So one thing you'll notice over here is that the database tier doesn't get any direct traffic from the internet. All internet traffic goes to the load balancer, only specific services then talk to the database. So this policy right now is in monitoring mode. It's not actually being enforced. So let's see what happens if I try to attack the database, I start a hack against the database. And I have my trusty brute force password script over here. It's trying the most common passwords against the database. And if I happen to choose a dictionary word or left the default passwords on, eventually it will log into the database. And when I go back over here in Flow what happens is it actually detects there's now an ongoing a flow, a flow that's outside of policy that's shown up. And it shows this in yellow. So right alongside the policy, I can visualize all the noncompliant flows. This makes it really easy for me now to make decisions, does this flow should it be part of the policy, should it not? In this particular case, obviously it should not be part of the policy. So let me just switch from monitoring mode to enforcement mode. I'll apply the policy, give it a second to propagate. The flow goes away. And if I go back to my script, you can see now the socket's timing out. I can no longer connect to the database. >> Sunil: Got it. So that's like one click segmentation and play right now? >> Absolutely. It's really, really simple. You can compare it to other products in the space. You can't get simpler than this. >> Got it. Why don't we got back and talk a little bit more about, so that's Flow. It's shipping now in 5.6 obviously. It'll come integrated with Netsil functionality as well as a variety of other enhancements in that next few releases. But Netsil does more than just simple topology discovery, right? >> Absolutely. So Netsil's actually gathering a lot of metrics from your network, from your host, all this goes through a data pipeline. It gets processed over there and then gets captured in a time series database. And then we can slice and dice that in various different ways. It can be used for all kinds of insights. So let's see how our application's behaving. So let me say I want to go into the API layer over here. And I instantly get a variety of metrics on how the application's behaving. I get the most requested endpoints. I get the average latency. It looks reasonably good. I get the average latency of the slowest endpoints. If I was having a performance problem, I would know exactly where to go focus on. Right now, things look very good, so we won't focus on that. But scrolling back up, I notice that we have a fairly high error rate happening. We have like 11.35% of our HTTP requests are generating errors, and that deserves some attention. And if I scroll down again, and I see the top five status codes I'm getting, almost 10% of my requests are generating 500 errors, HTTP 500 errors which are internal server errors. So there's something going on that's wrong with this application. So let's dig a little bit deeper into that. Let me go into my analytics workbench over here. And what I've plotted over here is how my HTTP requests are behaving over time. Let me filter down to just the 500 ones. That will make it easier. And I want the 500s. And I'll also group this by the service tier so that I can see which services are causing the problem. And the better view for this would be a bar graph. Yes, so once I do this, you can see that all the errors, all the 500 errors that we're seeing have been caused by the authentication service. So something's obviously wrong with that part of my application. I can go look at whether Active Directory is misbehaving and so on. So very quickly from a broad problem that I was getting a high HTTP error rate. In fact, usually you will discover there's this customer complaining about a lot of errors happening in your application. You can quickly narrow down to exactly what the cause was. >> Got it. This is what we mean by hyperconvergence of the network which is if you can truly isolate network related problems and associate them with the rest of the hyperconvergence infrastructure, then we've essentially started making real progress towards the next level of hyperconvergence. Anyway, thanks a lot, man. Great job. >> Thanks, man. (audience clapping) >> So to talk about this evolution from invisible infrastructure to invisible data centers is another customer of ours that has embarked on this journey. And you know it's not just using Nutanix but a variety of other tools to actually fulfill sort of like the ambition of a full blown cloud stack within a financial organization. And to talk more about that, let me call Vijay onstage. Come on up, Vijay. (rock music) >> Hey. >> Thank you, sir. So Vijay looks way better in real life than in a picture by the way. >> Except a little bit of gray. >> Unlike me. So tell me a little bit about this cloud initiative. >> Yeah. So we've won the best cloud initiative twice now hosted by Incisive media a large magazine. It's basically they host a bunch of you know various buy side, sell side, and you can submit projects in various categories. So we've won the best cloud twice now, 2015 and 2017. The 2017 award is when you know as part of our private cloud journey we were laying the foundation for our private cloud which is 100% based on hyperconverged infrastructure. So that was that award. And then 2017, we've kind of built on that foundation and built more developer-centric next gen app services like PAS, CAS, SDN, SDS, CICD, et cetera. So we've built a lot of those services on, and the second award was really related to that. >> Got it. And a lot of this was obviously based on an infrastructure strategy with some guiding principles that you guys had about three or four years ago if I remember. >> Yeah, this is a great slide. I use it very often. At the core of our infrastructure strategy is how do we run IT as a business? I talk about this with my teams, they were very familiar with this. That's the mindset that I instill within the teams. The mission, the challenge is the same which is how do we scale infrastructure while reducing total cost of ownership, improving time to market, improving client experience and while we're doing that not lose sight of reliability, stability, and security? That's the mission. Those are some of our guiding principles. Whenever we take on some large technology investments, we take 'em through those lenses. Obviously Nutanix went through those lenses when we invested in you guys many, many years ago. And you guys checked all the boxes. And you know initiatives change year on year, the mission remains the same. And more recently, the last few years, we've been focused on converged platforms, converged teams. We've actually reorganized our teams and aligned them closer to the platforms moving closer to an SRE like concept. >> And then you've built out a full stack now across computer storage, networking, all the way with various use cases in play? >> Yeah, and we're aggressively moving towards PAS, CAS as our method of either developing brand new cloud native applications or even containerizing existing applications. So the stack you know obviously built on Nutanix, SDS for software fine storage, compute and networking we've got SDN turned on. We've got, again, PAS and CAS built on this platform. And then finally, we've hooked our CICD tooling onto this. And again, the big picture was always frictionless infrastructure which we're very close to now. You know 100% of our code deployments into this environment are automated. >> Got it. And so what's the net, net in terms of obviously the business takeaway here? >> Yeah so at Northern we don't do tech for tech. It has to be some business benefits, client benefits. There has to be some outcomes that we measure ourselves against, and these are some great metrics or great ways to look at if we're getting the outcomes from the investments we're making. So for example, infrastructure scale while reducing total cost of ownership. We're very focused on total cost of ownership. We, for example, there was a build team that was very focus on building servers, deploying applications. That team's gone down from I think 40, 45 people to about 15 people as one example, one metric. Another metric for reducing TCO is we've been able to absorb additional capacity without increasing operating expenses. So you're actually building capacity in scale within your operating model. So that's another example. Another example, right here you see on the screen. Faster time to market. We've got various types of applications at any given point that we're deploying. There's a next gen cloud native which go directly on PAS. But then a majority of the applications still need the traditional IS components. The time to market to deploy a complex multi environment, multi data center application, we've taken that down by 60%. So we can deliver server same day, but we can deliver entire environments, you know add it to backup, add it to DNS, and fully compliant within a couple of weeks which is you know something we measure very closely. >> Great job, man. I mean that's a compelling I think results. And in the journey obviously you got promoted a few times. >> Yep. >> All right, congratulations again. >> Thank you. >> Thanks Vijay. >> Hey Vijay, come back here. Actually we forgot our joke. So razzled by his data points there. So you're supposed to wear some shoes, right? >> I know my inner glitch. I was going to wear those sneakers, but I forgot them at the office maybe for the right reasons. But the story behind those florescent sneakers, I see they're focused on my shoes. But I picked those up two years ago at a Next event, and not my style. I took 'em to my office. They've been sitting in my office for the last couple years. >> Who's received shoes like these by the way? I'm sure you guys have received shoes like these. There's some real fans there. >> So again, I'm sure many of you liked them. I had 'em in my office. I've offered it to so many of my engineers. Are you size 11? Do you want these? And they're unclaimed? >> So that's the only feature of Nutanix that you-- >> That's the only thing that hasn't worked, other than that things are going extremely well. >> Good job, man. Thanks a lot. >> Thanks. >> Thanks Vijay. So as we get to the final phase which is obviously as we embark on this multi-cloud journey and the complexity that comes with it which Dheeraj hinted towards in his session. You know we have to take a cautious, thoughtful approach here because we don't want to over set expectations because this will take us five, 10 years to really do a good job like we've done in the first act. And the good news is that the market is also really, really early here. It's just a fact. And so we've taken a tiered approach to it as we'll start the discussion with multi-cloud operations, and we've talked about the stack in the prior session which is about look across new clouds. So it's no longer Nutanix, Dell, Lenova, HP, Cisco as the new quote, unquote platforms. It's Nutanix, Xi, GCP, AWS, Azure as the new platforms. That's how we're designing the fabric going forward. On top of that, you obviously have the hybrid OS both on the data plane side and control plane side. Then what you're seeing with the advent of Calm doing a marketplace and automation as well as Beam doing governance and compliance is the fact that you'll see more and more such capabilities of multi-cloud operations burnt into the platform. And example of that is Calm with the new 5.7 release that they had. Launch supports multiple clouds both inside and outside, but the fundamental premise of Calm in the multi-cloud use case is to enable you to choose the right cloud for the right workload. That's the automation part. On the governance part, and this we kind of went through in the last half an hour with Dheeraj and Vijay on stage is something that's even more, if I can call it, you know first order because you get the provisioning and operations second. The first order is to say look whatever my developers have consumed off public cloud, I just need to first get our arm around to make sure that you know what am I spending, am I secure, and then when I get comfortable, then I am able to actually expand on it. And that's the power of Beam. And both Beam and Calm will be the yin and yang for us in our multi-cloud portfolio. And we'll have new products to complement that down the road, right? But along the way, that's the whole private cloud, public cloud. They're the two ends of the barbell, and over time, and we've been working on Xi for awhile, is this conviction that we've built talking to many customers that there needs to be another type of cloud. And this type of a cloud has to feel like a public cloud. It has to be architected like a public cloud, be consumed like a public cloud, but it needs to be an extension of my data center. It should not require any changes to my tooling. It should not require and changes to my operational infrastructure, and it should not require lift and shift, and that's a super hard problem. And this problem is something that a chunk of our R and D team has been burning the midnight wick on for the last year and a half. Because look this is not about taking our current OS which does a good job of scaling and plopping it into a Equinix or a third party data center and calling it a hybrid cloud. This is about rebuilding things in the OS so that we can deliver a true hybrid cloud, but at the same time, give those functionality back on premises so that even if you don't have a hybrid cloud, if you just have your own data centers, you'll still need new services like DR. And if you think about it, what are we doing? We're building a full blown multi-tenant virtual network designed in a modern way. Think about this SDN 2.0 because we have 10 years worth of looking backwards on how GCP has done it, or how Amazon has done it, and now sort of embodying some of that so that we can actually give it as part of this cloud, but do it in a way that's a seamless extension of the data center, and then at the same time, provide new services that have never been delivered before. Everyone obviously does failover and failback in DR it just takes months to do it. Our goal is to do it in hours or minutes. But even things such as test. Imagine doing a DR test on demand for you business needs in the middle of the day. And that's the real bar that we've set for Xi that we are working towards in early access later this summer with GA later in the year. And to talk more about this, let me invite some of our core architects working on it, Melina and Rajiv. (rock music) Good to see you guys. >> You're messing up the names again. >> Oh Rajiv, Vinny, same thing, man. >> You need to back up your memory from Xi. >> Yeah, we should. Okay, so what are we going to talk about, Vinny? >> Yeah, exactly. So today we're going to talk about how Xi is pushing the envelope and beyond the state of the art as you were saying in the industry. As part of that, there's a whole bunch of things that we have done starting with taking a private cloud, seamlessly extending it to the public cloud, and then creating a hybrid cloud experience with one-click delight. We're going to show that. We've done a whole bunch of engineering work on making sure the operations and the tooling is identical on both sides. When you graduate from a private cloud to a hybrid cloud environment, you don't want the environments to be different. So we've copied the environment for you with zero manual intervention. And finally, building on top of that, we are delivering DR as a service with unprecedented simplicity with one-click failover, one-click failback. We're going to show you one click test today. So Melina, why don't we start with showing how you go from a private cloud, seamlessly extend it to consume Xi. >> Sounds good, thanks Vinny. Right now, you're looking at my Prism interface for my on premises cluster. In one-click, I'm going to be able to extend that to my Xi cloud services account. I'm doing this using my my Nutanix credential and a password manager. >> Vinny: So here as you notice all the Nutanix customers we have today, we have created an account for them in Xi by default. So you don't have to log in somewhere and create an account. It's there by default. >> Melina: And just like that we've gone ahead and extended my data center. But let's go take a look at the Xi side and log in again with my my Nutanix credentials. We'll see what we have over here. We're going to be able to see two availability zones, one for on premises and one for Xi right here. >> Vinny: Yeah as you see, using a log in account that you already knew mynutanix.com and 30 seconds in, you can see that you have a hybrid cloud view already. You have a private cloud availability zone that's your own Prism central data center view, and then a Xi availability zone. >> Sunil: Got it. >> Melina: Exactly. But of course we want to extend my network connection from on premises to my Xi networks as well. So let's take a look at our options there. We have two ways of doing this. Both are one-click experience. With direct connect, you can create a dedicated network connection between both environments, or VPN you can use a public internet and a VPN service. Let's go ahead and enable VPN in this environment. Here we have two options for how we want to enable our VPN. We can bring our own VPN and connect it, or we will deploy a VPN for you on premises. We'll do the option where we deploy the VPN in one-click. >> And this is another small sign or feature that we're building net new as part of Xi, but will be burned into our core Acropolis OS so that we can also be delivering this as a stand alone product for on premises deployment as well, right? So that's one of the other things to note as you guys look at the Xi functionality. The goal is to keep the OS capabilities the same on both sides. So even if I'm building a quote, unquote multi data center cloud, but it's just a private cloud, you'll still get all the benefits of Xi but in house. >> Exactly. And on this second step of the wizard, there's a few inputs around how you want the gateway configured, your VLAN information and routing and protocol configuration details. Let's go ahead and save it. >> Vinny: So right now, you know what's happening is we're taking the private network that our customers have on premises and extending it to a multi-tenant public cloud such that our customers can use their IP addresses, the subnets, and bring their own IP. And that is another step towards making sure the operation and tooling is kept consistent on both sides. >> Melina: Exactly. And just while you guys were talking, the VPN was successfully created on premises. And we can see the details right here. You can track details like the status of the connection, the gateway, as well as bandwidth information right in the same UI. >> Vinny: And networking is just tip of the iceberg of what we've had to work on to make sure that you get a consistent experience on both sides. So Melina, why don't we show some of the other things we've done? >> Melina: Sure, to talk about how we preserve entities from my on-premises to Xi, it's better to use my production environment. And first thing you might notice is the log in screen's a little bit different. But that's because I'm logging in using my ADFS credentials. The first thing we preserved was our users. In production, I'm running AD obviously on-prem. And now we can log in here with the same set of credentials. Let me just refresh this. >> And this is the Active Directory credential that our customers would have. They use it on-premises. And we allow the setting to be set on the Xi cloud services as well, so it's the same set of users that can access both sides. >> Got it. There's always going to be some networking problem onstage. It's meant to happen. >> There you go. >> Just launching it again here. I think it maybe timed out. This is a good sign that we're running on time with this presentation. >> Yeah, yeah, we're running ahead of time. >> Move the demos quicker, then we'll time out. So essentially when you log into Xi, you'll be able to see what are the environment capabilities that we have copied to the Xi environment. So for example, you just saw that the same user is being used to log in. But after the use logs in, you'll be able to see their images, for example, copied to the Xi side. You'll be able to see their policies and categories. You know when you define these policies on premises, you spend a lot of effort and create them. And now when you're extending to the public cloud, you don't want to do it again, right? So we've done a whole lot of syncing mechanisms making sure that the two sides are consistent. >> Got it. And on top of these policies, the next step is to also show capabilities to actually do failover and failback, but also do integrated testing as part of this compatibility. >> So one is you know just the basic job of making the environments consistent on two sides, but then it's also now talking about the data part, and that's what DR is about. So if you have a workload running on premises, we can take the data and replicate it using your policies that we've already synced. Once the data is available on the Xi side, at that point, you have to define a run book. And the run book essentially it's a recovery plan. And that says okay I already have the backups of my VMs in case of disaster. I can take my recovery plan and hit you know either failover or maybe a test. And then my application comes up. First of all, you'll talk about the boot order for your VMs to come up. You'll talk about networking mapping. Like when I'm running on-prem, you're using a particular subnet. You have an option of using the same subnet on the Xi side. >> Melina: There you go. >> What happened? >> Sunil: It's finally working.? >> Melina: Yeah. >> Vinny, you can stop talking. (audience clapping) By the way, this is logging into a live Xi data center. We have two regions West Coat, two data centers East Coast, two data centers. So everything that you're seeing is essentially coming off the mainstream Xi profile. >> Vinny: Melina, why don't we show the recovery plan. That's the most interesting piece here. >> Sure. The recovery plan is set up to help you specify how you want to recover your applications in the event of a failover or a test failover. And it specifies all sorts of details like the boot sequence for the VMs as well as network mappings. Some of the network mappings are things like the production network I have running on premises and how it maps to my production network on Xi or the test network to the test network. What's really cool here though is we're actually automatically creating your subnets on Xi from your on premises subnets. All that's part of the recovery plan. While we're on the screen, take a note of the .100 IP address. That's a floating IP address that I have set up to ensure that I'm going to be able to access my three tier web app that I have protected with this plan after a failover. So I'll be able to access it from the public internet really easily from my phone or check that it's all running. >> Right, so given how we make the environment consistent on both sides, now we're able to create a very simple DR experience including failover in one-click, failback. But we're going to show you test now. So Melina, let's talk about test because that's one of the most common operations you would do. Like some of our customers do it every month. But usually it's very hard. So let's see how the experience looks like in what we built. >> Sure. Test and failover are both one-click experiences as you know and come to expect from Nutanix. You can see it's failing over from my primary location to my recovery location. Now what we're doing right now is we're running a series of validation checks because we want to make sure that you have your network configured properly, and there's other configuration details in place for the test to be successful. Looks like the failover was initiated successfully. Now while that failover's happening though, let's make sure that I'm going to be able to access my three tier web app once it fails over. We'll do that by looking at my network policies that I've configured on my test network. Because I want to access the application from the public internet but only port 80. And if we look here under our policies, you can see I have port 80 open to permit. So that's good. And if I needed to create a new one, I could in one click. But it looks like we're good to go. Let's go back and check the status of my recovery plan. We click in, and what's really cool here is you can actually see the individual tasks as they're being completed from that initial validation test to individual VMs being powered on as part of the recovery plan. >> And to give you guys an idea behind the scenes, the entire recovery plan is actually a set of workflows that are built on Calm's automation engine. So this is an example of where we're taking some of power of workflow and automation that Clam has come to be really strong at and burning that into how we actually operationalize many of these workflows for Xi. >> And so great, while you were explaining that, my three tier web app has restarted here on Xi right in front of you. And you can see here there's a floating IP that I mentioned early that .100 IP address. But let's go ahead and launch the console and make sure the application started up correctly. >> Vinny: Yeah, so that .100 IP address is a floating IP that's a publicly visible IP. So it's listed here, 206.80.146.100. And that's essentially anybody in the audience here can go use your laptop or your cell phone and hit that and start to work. >> Yeah so by the way, just to give you guys an idea while you guys maybe use the IP to kind of hit it, is a real set of VMs that we've just failed over from Nutanix's corporate data center into our West region. >> And this is running live on the Xi cloud. >> Yeah, you guys should all go and vote. I'm a little biased towards Xi, so vote for Xi. But all of them are really good features. >> Scroll up a little bit. Let's see where Xi is. >> Oh Xi's here. I'll scroll down a little bit, but keep the... >> Vinny: Yes. >> Sunil: You guys written a block or something? >> Melina: Oh good, it looks like Xi's winning. >> Sunil: Okay, great job, Melina. Thank you so much. >> Thank you, Melina. >> Melina: Thanks. >> Thank you, great job. Cool and calm under pressure. That's good. So that was Xi. What's something that you know we've been doing around you know in addition to taking say our own extended enterprise public cloud with Xi. You know we do recognize that there are a ton of workloads that are going to be residing on AWS, GCP, Azure. And to sort of really assist in the try and call it transformation of enterprises to choose the right cloud for the right workload. If you guys remember, we actually invested in a tool over last year which became actually quite like one of those products that took off based on you know groundswell movement. Most of you guys started using it. It's essentially extract for VMs. And it was this product that's obviously free. It's a tool. But it enables customers to really save tons of time to actually migrate from legacy environments to Nutanix. So we took that same framework, obviously re-platformed it for the multi-cloud world to kind of solve the problem of migrating from AWS or GCP to Nutanix or vice versa. >> Right, so you know, Sunil as you said, moving from a private cloud to the public cloud is a lift and shift, and it's a hard you know operation. But moving back is not only expensive, it's a very hard problem. None of the cloud vendors provide change block tracking capability. And what that means is when you have to move back from the cloud, you have an extended period of downtime because there's now way of figuring out what's changing while you're moving. So you have to keep it down. So what we've done with our app mobility product is we have made sure that, one, it's extremely simple to move back. Two, that the downtime that you'll have is as small as possible. So let me show you what we've done. >> Got it. >> So here is our app mobility capability. As you can see, on the left hand side we have a source environment and target environment. So I'm calling my AWS environment Asgard. And I can add more environments. It's very simple. I can select AWS and then put in my credentials for AWS. It essentially goes and discovers all the VMs that are running and all the regions that they're running. Target environment, this is my Nutanix environment. I call it Earth. And I can add target environment similarly, IP address and credentials, and we do the rest. Right, okay. Now migration plans. I have Bifrost one as my migration plan, and this is how migration works. First you create a plan and then say start seeding. And what it does is takes a snapshot of what's running in the cloud and starts migrating it to on-prem. Once it is an on-prem and the difference between the two sides is minimal, it says I'm ready to cutover. At that time, you move it. But let me show you how you'd create a new migration plan. So let me name it, Bifrost 2. Okay so what I have to do is select a region, so US West 1, and target Earth as my cluster. This is my storage container there. And very quickly you can see these are the VMs that are running in US West 1 in AWS. I can select SQL server one and two, go to next. Right now it's looking at the target Nutanix environment and seeing it had enough space or not. Once that's good, it gives me an option. And this is the step where it enables the Nutanix service of change block tracking overlaid on top of the cloud. There are two options one is automatic where you'll give us the credentials for your VMs, and we'll inject our capability there. Or manually you could do. You could copy the command either in a windows VM or Linux VM and run it once on the VM. And change block tracking since then in enabled. Everything is seamless after that. Hit next. >> And while Vinny's setting it up, he said a few things there. I don't know if you guys caught it. One of the hardest problems in enabling seamless migration from public cloud to on-prem which makes it harder than the other way around is the fact that public cloud doesn't have things like change block tracking. You can't get delta copies. So one of the core innovations being built in this app mobility product is to provide that overlay capability across multiple clouds. >> Yeah, and the last step here was to select the target network where the VMs will come up on the Nutanix environment, and this is a summary of the migration plan. You can start it or just save it. I'm saving it because it takes time to do the seeding. I have the other plan which I'll actually show the cutover with. Okay so now this is Bifrost 1. It's ready to cutover. We started it four hours ago. And here you can see there's a SQL server 003. Okay, now I would like to show the AWS environment. As you can see, SQL server 003. This VM is actually running in AWS right now. And if you go to the Prism environment, and if my login works, right? So we can go into the virtual machine view, tables, and you see the VM is not there. Okay, so we go back to this, and we can hit cutover. So this is essentially telling our system, okay now it the time. Quiesce the VM running in AWS, take the last bit of changes that you have to the database, ship it to on-prem, and in on-prem now start you know configure the target VM and start bringing it up. So let's go and look at AWS and refresh that screen. And you should see, okay so the SQL server is now stopping. So that means it has quiesced and stopping the VM there. If you go back and look at the migration plan that we had, it says it's completed. So it has actually migrated all the data to the on-prem side. Go here on-prem, you see the production SQL server is running already. I can click launch console, and let's see. The Windows VM is already booting up. >> So essentially what Vinny just showed was a live cutover of an AWS VM to Nutanix on-premises. >> Yeah, and what we have done. (audience clapping) So essentially, this is about making two things possible, making it simple to migrate from cloud to on-prem, and making it painless so that the downtime you have is very minimal. >> Got it, great job, Vinny. I won't forget your name again. So last step. So to really talk about this, one of our favorite partners and customers has been in the cloud environment for a long time. And you know Jason who's the CTO of Cyxtera. And he'll introduce who Cyxtera is. Most of you guys are probably either using their assets or not without knowing their you know the new name. But is someone that was in the cloud before it was called cloud as one of the original founders and technologists behind Terremark, and then later as one of the chief architects of VMware's cloud. And then they started this new company about a year or so ago which I'll let Jason talk about. This journey that he's going to talk about is how a partner, slash customer is working with us to deliver net new transformations around the traditional industry of colo. Okay, to talk more about it, Jason, why don't you come up on stage, man? (rock music) Thank you, sir. All right so Cyxtera obviously a lot of people don't know the name. Maybe just give a 10 second summary of why you're so big already. >> Sure, so Cyxtera was formed, as you said, about a year ago through the acquisition of the CenturyLink data centers. >> Sunil: Which includes Savvis and a whole bunch of other assets. >> Yeah, there's a long history of those data centers, but we have all of them now as well as the software companies owned by Medina capital. So we're like the world's biggest startup now. So we have over 50 data centers around the world, about 3,500 customers, and a portfolio of security and analytics software. >> Sunil: Got it, and so you have this strategy of what we're calling revolutionizing colo deliver a cloud based-- >> Yeah so, colo hasn't really changed a lot in the last 20 years. And to be fair, a lot of what happens in data centers has to have a person physically go and do it. But there are some things that we can simplify and automate. So we want to make things more software driven, so that's what we're doing with the Cyxtera extensible data center or CXD. And to do that, we're deploying software defined networks in our facilities and developing automations so customers can go and provision data center services and the network connectivity through a portal or through REST APIs. >> Got it, and what's different now? I know there's a whole bunch of benefits with the integrated platform that one would not get in the traditional kind of on demand data center environment. >> Sure. So one of the first services we're launching on CXD is compute on demand, and it's powered by Nutanix. And we had to pick an HCI partner to launch with. And we looked at players in the space. And as you mentioned, there's actually a lot of them, more than I thought. And we had a lot of conversations, did a lot of testing in the lab, and Nutanix really stood out as the best choice. You know Nutanix has a lot of focus on things like ease of deployment. So it's very simple for us to automate deploying compute for customers. So we can use foundation APIs to go configure the servers, and then we turn those over to the customer which they can then manage through Prism. And something important to keep in mind here is that you know this isn't a manged service. This isn't infrastructure as a service. The customer has complete control over the Nutanix platform. So we're turning that over to them. It's connected to their network. They're using their IP addresses, you know their tools and processes to operate this. So it was really important for the platform we picked to have a really good self-service story for things like you know lifecycle management. So with one-click upgrade, customers have total control over patches and upgrades. They don't have to call us to do it. You know they can drive that themselves. >> Got it. Any other final words around like what do you see of the partnership going forward? >> Well you know I think this would be a great platform for Xi, so I think we should probably talk about that. >> Yeah, yeah, we should talk about that separately. Thanks a lot, Jason. >> Thanks. >> All right, man. (audience clapping) So as we look at the full journey now between obviously from invisible infrastructure to invisible clouds, you know there is one thing though to take away beyond many updates that we've had so far. And the fact is that everything that I've talked about so far is about completing a full blown true IA stack from all the way from compute to storage, to vitualization, containers to network services, and so forth. But every public cloud, a true cloud in that sense, has a full blown layer of services that's set on top either for traditional workloads or for new workloads, whether it be machine-learning, whether it be big data, you know name it, right? And in the enterprise, if you think about it, many of these services are being provisioned or provided through a bunch of our partners. Like we have partnerships with Cloudera for big data and so forth. But then based on some customer feedback and a lot of attention from what we've seen in the industry go out, just like AWS, and GCP, and Azure, it's time for Nutanix to have an opinionated view of the past stack. It's time for us to kind of move up the stack with our own offering that obviously adds value but provides some of our core competencies in data and takes it to the next level. And it's in that sense that we're actually launching Nutanix Era to simplify one of the hardest problems in enterprise IT and short of saving you from true Oracle licensing, it solves various other Oracle problems which is about truly simplifying databases much like what RDS did on AWS, imagine enterprise RDS on demand where you can provision, lifecycle manage your database with one-click. And to talk about this powerful new functionality, let me invite Bala and John on stage to give you one final demo. (rock music) Good to see you guys. >> Yep, thank you. >> All right, so we've got lots of folks here. They're all anxious to get to the next level. So this demo, really rock it. So what are we going to talk about? We're going to start with say maybe some database provisioning? Do you want to set it up? >> We have one dream, Sunil, one single dream to pass you off, that is what Nutanix is today for IT apps, we want to recreate that magic for devops and get back those weekends and freedom to DBAs. >> Got it. Let's start with, what, provisioning? >> Bala: Yep, John. >> Yeah, we're going to get in provisioning. So provisioning databases inside the enterprise is a significant undertaking that usually involves a myriad of resources and could take days. It doesn't get any easier after that for the longterm maintence with things like upgrades and environment refreshes and so on. Bala and team have been working on this challenge for quite awhile now. So we've architected Nutanix Era to cater to these enterprise use cases and make it one-click like you said. And Bala and I are so excited to finally show this to the world. We think it's actually Nutanix's best kept secrets. >> Got it, all right man, let's take a look at it. >> So we're going to be provisioning a sales database today. It's a four-step workflow. The first part is choosing our database engine. And since it's our sales database, we want it to be highly available. So we'll do a two node rack configuration. From there, it asks us where we want to land this service. We can either land it on an existing service that's already been provisioned, or if we're starting net new or for whatever reason, we can create a new service for it. The key thing here is we're not asking anybody how to do the work, we're asking what work you want done. And the other key thing here is we've architected this concept called profiles. So you tell us how much resources you need as well as what network type you want and what software revision you want. This is actually controlled by the DBAs. So DBAs, and compute administrators, and network administrators, so they can set their standards without having a DBA. >> Sunil: Got it, okay, let's take a look. >> John: So if we go to the next piece here, it's going to personalize their database. The key thing here, again, is that we're not asking you how many data files you want or anything in that regard. So we're going to be provisioning this to Nutanix's best practices. And the key thing there is just like these past services you don't have to read dozens of pages of best practice guides, it just does what's best for the platform. >> Sunil: Got it. And so these are a multitude of provisioning steps that normally one would take I guess hours if not days to provision and Oracle RAC data. >> John: Yeah, across multiple teams too. So if you think about the lifecycle especially if you have onshore and offshore resources, I mean this might even be longer than days. >> Sunil: Got it. And then there are a few steps here, and we'll lead into potentially the Time Machine construct too? >> John: Yeah, so since this is a critical database, we want data protection. So we're going to be delivering that through a feature called Time Machines. We'll leave this at the defaults for now, but the key thing to not here is we've got SLAs that deliver both continuous data protection as well as telescoping checkpoints for historical recovery. >> Sunil: Got it. So that's provisioning. We've kicked off Oracle, what, two node database and so forth? >> John: Yep, two node database. So we've got a handful of tasks that this is going to automate. We'll check back in in a few minutes. >> Got it. Why don't we talk about the other aspects then, Bala, maybe around, one of the things that, you know and I know many of you guys have seen this, is the fact that if you look at database especially Oracle but in general even SQL and so forth is the fact that look if you really simplified it to a developer, it should be as simple as I copy my production database, and I paste it to create my own dev instance. And whenever I need it, I need to obviously do it the opposite way, right? So that was the goal that we set ahead for us to actually deliver this new past service around Era for our customers. So you want to talk a little bit more about it? >> Sure Sunil. If you look at most of the data management functionality, they're pretty much like flavors of copy paste operations on database entities. But the trouble is the seemingly simple, innocuous operations of our daily lives becomes the most dreaded, complex, long running, error prone operations in data center. So we actually planned to tame this complexity and bring consumer grade simplicity to these operations, also make these clones extremely efficient without compromising the quality of service. And the best part is, the customers can enjoy these services not only for databases running on Nutanix, but also for databases running on third party systems. >> Got it. So let's take a look at this functionality of I guess snapshoting, clone and recovery that you've now built into the product. >> Right. So now if you see the core feature of this whole product is something we call Time Machine. Time Machine lets the database administrators actually capture the database tape to the granularity of seconds and also lets them create clones, refresh them to any point in time, and also recover the databases if the databases are running on the same Nutanix platform. Let's take a look at the demo with the Time Machine. So here is our customer relationship database management database which is about 2.3 terabytes. If you see, the Time Machine has been active about four months, and SLA has been set for continuously code revision of 30 days and then slowly tapers off 30 days of daily backup and weekly backups and so on, so forth. On the right hand side, you will see different colors. The green color is pretty much your continuously code revision, what we call them. That lets you to go back to any point in time to the granularity of seconds within those 30 days. And then the discreet code revision lets you go back to any snapshot of the backup that is maintained there kind of stuff. In a way, you see this Time Machine is pretty much like your modern day car with self driving ability. All you need to do is set the goals, and the Time Machine will do whatever is needed to reach up to the goal kind of stuff. >> Sunil: So why don't we quickly do a snapshot? >> Bala: Yeah, some of these times you need to create a snapshot for backup purposes, Time Machine has manual controls. All you need to do is give it a snapshot name. And then you have the ability to actually persist this snapshot data into a third party or object store so that your durability and that global data access requirements are met kind of stuff. So we kick off a snapshot operation. Let's look at what it is doing. If you see what is the snapshot operation that this is going through, there is a step called quiescing the databases. Basically, we're using application-centric APIs, and here it's actually RMAN of Oracle. We are using the RMan of Oracle to quiesce the database and performing application consistent storage snapshots with Nutanix technology. Basically we are fusing application-centric and then Nutanix platform and quiescing it. Just for a data point, if you have to use traditional technology and create a backup for this kind of size, it takes over four to six hours, whereas on Nutanix it's going to be a matter of seconds. So it almost looks like snapshot is done. This is full sensitive backup. You can pretty much use it for database restore kind of stuff. Maybe we'll do a clone demo and see how it goes. >> John: Yeah, let's go check it out. >> Bala: So for clone, again through the simplicity of command Z command, all you need to do is pick the time of your choice maybe around three o'clock in the morning today. >> John: Yeah, let's go with 3:02. >> Bala: 3:02, okay. >> John: Yeah, why not? >> Bala: You select the time, all you need to do is click on the clone. And most of the inputs that are needed for the clone process will be defaulted intelligently by us, right? And you have to make two choices that is where do you want this clone to be created with a brand new VM database server, or do you want to place that in your existing server? So we'll go with a brand new server, and then all you need to do is just give the password for you new clone database, and then clone it kind of stuff. >> Sunil: And this is an example of personalizing the database so a developer can do that. >> Bala: Right. So here is the clone kicking in. And what this is trying to do is actually it's creating a database VM and then registering the database, restoring the snapshot, and then recoding the logs up to three o'clock in the morning like what we just saw that, and then actually giving back the database to the requester kind of stuff. >> Maybe one finally thing, John. Do you want to show us the provision database that we kicked off? >> Yeah, it looks like it just finished a few seconds ago. So you can see all the tasks that we were talking about here before from creating the virtual infrastructure, and provisioning the database infrastructure, and configuring data protection. So I can go access this database now. >> Again, just to highlight this, guys. What we just showed you is an Oracle two node instance provisioned live in a few minutes on Nutanix. And this is something that even in a public cloud when you go to RDS on AWS or anything like that, you still can't provision Oracle RAC by the way, right? But that's what you've seen now, and that's what the power of Nutanix Era is. Okay, all right? >> Thank you. >> Thanks. (audience clapping) >> And one final thing around, obviously when we're building this, it's built as a past service. It's not meant just for operational benefits. And so one of the core design principles has been around being API first. You want to show that a little bit? >> Absolutely, Sunil, this whole product is built on API fist architecture. Pretty much what we have seen today and all the functionality that we've been able to show today, everything is built on Rest APIs, and you can pretty much integrate with service now architecture and give you your devops experience for your customers. We do have a plan for full fledged self-service portal eventually, and then make it as a proper service. >> Got it, great job, Bala. >> Thank you. >> Thanks, John. Good stuff, man. >> Thanks. >> All right. (audience clapping) So with Nutanix Era being this one-click provisioning, lifecycle management powered by APIs, I think what we're going to see is the fact that a lot of the products that we've talked about so far while you know I've talked about things like Calm, Flow, AHV functionality that have all been released in 5.5, 5.6, a bunch of the other stuff are also coming shortly. So I would strongly encourage you guys to kind of space 'em, you know most of these products that we've talked about, in fact, all of the products that we've talked about are going to be in the breakout sessions. We're going to go deep into them in the demos as well as in the pods. So spend some quality time not just on the stuff that's been shipping but also stuff that's coming out. And so one thing to keep in mind to sort of takeaway is that we're doing this all obviously with freedom as the goal. But from the products side, it has to be driven by choice whether the choice is based on platforms, it's based on hypervisors, whether it's based on consumption models and eventually even though we're starting with the management plane, eventually we'll go with the data plane of how do I actually provide a multi-cloud choice as well. And so when we wrap things up, and we look at the five freedoms that Ben talked about. Don't forget the sixth freedom especially after six to seven p.m. where the whole goal as a Nutanix family and extended family make sure we mix it up. Okay, thank you so much, and we'll see you around. (audience clapping) >> PA Announcer: Ladies and gentlemen, this concludes our morning keynote session. Breakouts will begin in 15 minutes. ♪ To do what I want ♪

Published Date : May 9 2018

SUMMARY :

PA Announcer: Off the plastic tab, would you please welcome state of Louisiana And it's my pleasure to welcome you all to And I'd like to second that warm welcome. the free spirit. the Nutanix Freedom video, enjoy. And I read the tagline from license to launch You have the freedom to go and choose and having to gain the trust with you over time, At the same time, you spent the last seven, eight years and apply intelligence to say how can we lower that you go and advise with some of the software to essentially reduce their you know they're supposed to save are still only 20%, 25% utilized. And the next thing is you can't do So you actually sized it for peak, and bring the control while retaining that agility So you want to show us something? And you know glad to be here. to see you know are there resources that you look at everyday. So billions of events, billing, metering events So what we have here is a very popular are everywhere, the cloud is everywhere actually. So when you bring your master account that you create because you don't want So we have you know consumption of the services. There's a lot of money being made So not only just get visibility at you know compute So all of you who actually have not gone the single pane view you know to mange What you see here is they're using have been active in Russia as well. to detect you know how can you rightsize So one click, you can actually just pick Yeah, and not only remove the resources the consumption for the Nutanix, you know the services And the most powerful thing is you can go to say how can you really remove things. So again, similar to save, you're saying So the idea is how can we give our people It looks like there's going to be a talk here at 10:30. Yes, so you can go and write your own security So the end in all this is, again, one of the things And to start the session, I think you know the part You barely fit in that door, man. that's grown from VDI to business critical So if we hop over here to our explore tab, in recent releases to kind of make this happen? Now to allow you to full take advantage of that, On the same environment though, we're going to show you So one of the shares that you see there is home directories. Do we have the cluster also showing, So if we think about cloud, cloud's obviously a big So just like the market took a left turn on Kubernetes, Now for the developer, the application architect, So the goal of ACS is to ensure So you can deploy however many of these He hasn't seen the movies yet. And this is going to be the number And if you come over to our office, and we welcome you, Thanks so much. And like Steve who's been with us for awhile, So I remember, so how many of you guys And the deployment is smaller than what we had And it covers a lot of use cases as well. So the use cases, we're 90%, 95% deployed on Nutanix, So the plan going forward, you actually asked And the same thing when you actually flip it to AHV And to give you a flavor of that, let me show you And now you can see this is a much simpler picture. Yeah, for those guys, you know that's not the Avengers This is next years theme. So before we cut over from Netsil to Flow, And that of course is the most important So that's like one click segmentation and play right now? You can compare it to other products in the space. in that next few releases. And if I scroll down again, and I see the top five of the network which is if you can truly isolate (audience clapping) And you know it's not just using Nutanix than in a picture by the way. So tell me a little bit about this cloud initiative. and the second award was really related to that. And a lot of this was obviously based on an infrastructure And you know initiatives change year on year, So the stack you know obviously built on Nutanix, of obviously the business takeaway here? There has to be some outcomes that we measure And in the journey obviously you got So you're supposed to wear some shoes, right? for the last couple years. I'm sure you guys have received shoes like these. So again, I'm sure many of you liked them. That's the only thing that hasn't worked, Thanks a lot. is to enable you to choose the right cloud Yeah, we should. of the art as you were saying in the industry. that to my Xi cloud services account. So you don't have to log in somewhere and create an account. But let's go take a look at the Xi side that you already knew mynutanix.com and 30 seconds in, or we will deploy a VPN for you on premises. So that's one of the other things to note the gateway configured, your VLAN information Vinny: So right now, you know what's happening is And just while you guys were talking, of the other things we've done? And first thing you might notice is And we allow the setting to be set on the Xi cloud services There's always going to be some networking problem onstage. This is a good sign that we're running So for example, you just saw that the same user is to also show capabilities to actually do failover And that says okay I already have the backups is essentially coming off the mainstream Xi profile. That's the most interesting piece here. or the test network to the test network. So let's see how the experience looks like details in place for the test to be successful. And to give you guys an idea behind the scenes, And so great, while you were explaining that, And that's essentially anybody in the audience here Yeah so by the way, just to give you guys Yeah, you guys should all go and vote. Let's see where Xi is. I'll scroll down a little bit, but keep the... Thank you so much. What's something that you know we've been doing And what that means is when you have And very quickly you can see these are the VMs So one of the core innovations being built So that means it has quiesced and stopping the VM there. So essentially what Vinny just showed and making it painless so that the downtime you have And you know Jason who's the CTO of Cyxtera. of the CenturyLink data centers. bunch of other assets. So we have over 50 data centers around the world, And to be fair, a lot of what happens in data centers in the traditional kind of on demand is that you know this isn't a manged service. of the partnership going forward? Well you know I think this would be Thanks a lot, Jason. And in the enterprise, if you think about it, We're going to start with say maybe some to pass you off, that is what Nutanix is Got it. And Bala and I are so excited to finally show this And the other key thing here is we've architected And the key thing there is just like these past services if not days to provision and Oracle RAC data. So if you think about the lifecycle And then there are a few steps here, but the key thing to not here is we've got So that's provisioning. that this is going to automate. is the fact that if you look at database And the best part is, the customers So let's take a look at this functionality On the right hand side, you will see different colors. And then you have the ability to actually persist of command Z command, all you need to do Bala: You select the time, all you need the database so a developer can do that. back the database to the requester kind of stuff. Do you want to show us the provision database So you can see all the tasks that we were talking about here What we just showed you is an Oracle two node instance (audience clapping) And so one of the core design principles and all the functionality that we've been able Good stuff, man. But from the products side, it has to be driven by choice PA Announcer: Ladies and gentlemen,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
KarenPERSON

0.99+

JuliePERSON

0.99+

MelinaPERSON

0.99+

StevePERSON

0.99+

MatthewPERSON

0.99+

Julie O'BrienPERSON

0.99+

VinnyPERSON

0.99+

CiscoORGANIZATION

0.99+

DellORGANIZATION

0.99+

NutanixORGANIZATION

0.99+

DheerajPERSON

0.99+

RussiaLOCATION

0.99+

LenovoORGANIZATION

0.99+

MiamiLOCATION

0.99+

AmazonORGANIZATION

0.99+

HPORGANIZATION

0.99+

2012DATE

0.99+

AcropolisORGANIZATION

0.99+

Stacy NighPERSON

0.99+

Vijay RayapatiPERSON

0.99+

StacyPERSON

0.99+

PrismORGANIZATION

0.99+

IBMORGANIZATION

0.99+

RajivPERSON

0.99+

$3 billionQUANTITY

0.99+

2016DATE

0.99+

Matt VincePERSON

0.99+

GenevaLOCATION

0.99+

twoQUANTITY

0.99+

ThursdayDATE

0.99+

VijayPERSON

0.99+

one hourQUANTITY

0.99+

100%QUANTITY

0.99+

$100QUANTITY

0.99+

Steve PoitrasPERSON

0.99+

15 timesQUANTITY

0.99+

CasablancaLOCATION

0.99+

2014DATE

0.99+

Choice Hotels InternationalORGANIZATION

0.99+

Dheeraj PandeyPERSON

0.99+

DenmarkLOCATION

0.99+

4,000QUANTITY

0.99+

2015DATE

0.99+

DecemberDATE

0.99+

threeQUANTITY

0.99+

3.8 petabytesQUANTITY

0.99+

six timesQUANTITY

0.99+

40QUANTITY

0.99+

New OrleansLOCATION

0.99+

LenovaORGANIZATION

0.99+

NetsilORGANIZATION

0.99+

two sidesQUANTITY

0.99+

100 customersQUANTITY

0.99+

20%QUANTITY

0.99+

Chris Cummings, Chasm Institute | CUBE Conversation with John Furrier


 

(techy music playing) >> Hello, everyone, welcome to theCUBE Studios here in Palo Alto, California. I'm John Furrier, the cofounder of SiliconANGLE Media Inc., also cohost of theCUBE. We're here for a CUBE Conversation on Thought Leader Thursday and I'm here with Chris Cummings, who's a senior manager, advisor, big-time industry legend, but he's also the Chasm Group, right now, doer, Crossing the Chasm, famous books and it's all about the future. Formerly an exec at Netapp, been in the storage and infrastructure cloud tech business, also friends of Stanford. Season tickets together to go to the tailgates, but big Cal game coming up of course, but more importantly a big-time influence in the industry and we're going to do some drill down on what's going on with cloud computing, all the buzzword bingo going on in the industry. Also, AWS, Amazon Web Services re:Invent is coming up, do a little preview there, but really kind of share our views on what's happening in the industry, because there's a lot of noise out there. We're going to try to get the signal from the noise, thanks for watching. Chris, thanks for coming in. >> Thank you so much for having me, glad to be here. >> Great to see you, so you know, you have seen a lot of waves of innovation and right now you're working with a lot of companies trying to figure out the future. >> That's right. >> And you're seeing a lot of significant industry shifts. We talk about it on theCUBE all the time. Blockchain from decentralization all the way up to massive consolidation with hyper-convergence in the enterprise. >> Mm-hmm. >> So a lot of action, and because of the day the people out in the marketplace, whether it's a developer or a CXO, CIO, CDO, whatever enterprise leader's doing the transformations. >> Chris Collins: We got all of them. >> They're trying to essentially not go out of business. A lot of great things are happening, but at the same time a lot of pressure on the business is happening. So, let's discuss that, I mean, you are doing this for work at the Chasm Group. Talk about your role, you were formerly at Netapp, so I know you know the storage business. >> Right. >> So we're going to have a great conversation about storage and impact infrastructure, but at the Chasm Group how are you guys framing the conversation? >> Yeah, Chasm Group is really all about helping these companies process their thinking, think about if they're going to get to be a platform out in the industry. You can't just go and become a platform in the industry, you got to go knock down problem, problem, problem, solution, solution, solution. So we help them prioritize that and think about best practices for achieving that. >> You know, Dave Alante, my co-CEO, copartner, co-founder at SiliconANGLE Media and I always talk about this all the time, and the expression we use is if you don't know what check mate looks like you shouldn't be playing chess, and a lot of the IT folks and CIOs are in that mode now where the game has changed so much that sometimes they don't even know what they're playing. You know, they've been leaning on this Magic Quadrant from Gartner and all these other analyst firms and it's been kind of a slow game, a batch kind of game, now it's real time. Whatever metaphor you want to use, the game has changed so the chessboard has changed. >> Chris: Mm-hmm. >> So I got to get your take on this because you've been involved in strategy, been on product, you worked at growth companies, big companies, start-ups, and now looking at the bigger picture, what is the game? I mean, right now if you could lay out the chessboard, what are people looking at, what is the game? >> So, we deal a lot with customer conversations and that's where it all kind of begins, and I think what we found is this era of pushing product and just throwing stuff out there. It worked for a while but those days are over. These folks are so overwhelmed. The titles you mentioned, CIO, CDO, all the dev ops people, they're so overwhelmed with what's going on out there. What they want is people to come in and tell them about what's happening out there, what are their peers doing and what problems are they trying to solve in order and drive it that way. >> And there's a lot of disruption on the product side. >> Yes. >> So tech's changing, obviously the business models are changing, that's a different issue. Let's consider the tech things, you have-- >> Mm-hmm. >> A tech perspective, let's get into the tech conversation. You got cloud, you got private cloud, hybrid cloud, multi-cloud, micro-machine learning, hyper-machine learning, hyper-cloud, all these buzzwords are out there. It's buzzwords bingo. >> Chris: Right. >> But also the reality is you got Amazon Web Services absolutely crushing it, no doubt about it. I mean, I've been looking at Oracle, I've been looking at Google, I've been looking at SAP, looking at IBM, looking at Alibaba, looking at Microsoft, the game is really kind of a cloak and dagger situation going on here. >> That's right. >> A lot of things shifting on the provider side, but no doubt scale is the big issue. >> Chris: That's right. >> So how does a customer squint through all this? >> The conversations that I've had, especially with the larger enterprises, is they know that they've got to be able to adopt and utilize the public cloud capabilities, but they also want to retain that degree of control, so they want to maintain, whether it's their apps, their dev ops, some pieces of their infrastructure on prem, and as you talked about that transition it used to be okay, well we thought about cloud was equal to private cloud, then it became public cloud. Hybrid cloud, people are hanging on to hybrid cloud, sometimes for the right reasons and sometimes for the wrong reasons. Right reasons are because it's critical for their business. You look at somebody, for instance, in media and entertainment. They can't just push everything out there. They've got to retain control and really have their hands around that content because they've got to be able to distribute it, right? But then you look at some others that are hanging on for the wrong reasons, and the wrong reasons are they want to have their control and they want to have their salary and they want to have their staff, so boy, hybrid sounds like a mix that works. >> So I'm going to be having a one-on-one with Andy Jassy next week, exclusive. I do that every year as part of theCUBE. He's a great guy, good friend, become a good friend, because we've been a fan of him when no one loved Amazon. We saw the early, obviously at SiliconANGLE, now he's the king of the industry, but he's a great manager, great executive, and has done a great job on his ethos of Bezos and Amazon. Ship stuff faster, lower prices, the flywheel that Amazon uses. Everything's kind of on that-- And they own Twitch, which we stream, too, and we love. But if you could ask Andy any questions what questions would you ask him if you get to have that one-on-one? >> Yeah, well, it stems from conversations I've had with customers, which was probably once a week I would be talking to a CIO or somebody on that person's staff, and they'd slide the piece of paper across and say this is my bill. I had no idea that this was what AWS was going to drive me from a billing perspective, and I think we've seen... You know, we've had all kinds of commentary out there about ingress fees, egress fees, all of that sort of stuff. I think the question for Andy, when you look at the amount of revenue and operating margin that they are generating in that business, how are they going to start diversifying that pricing strategy so that they can keep those customers on without having them rethink their strategy in the future. >> So are you saying that when they slide that piece of paper over that the fees are higher than expected or not... Or low and happy, they're happy with the prices. >> Oh, they're-- I think they're-- I think it's the first time they've ever thought that it could be as expensive as on-premise infrastructure because they just didn't understand when they went into this how much it was going to cost to access that data over time, and when you're talking about data that is high volume and high frequency data, they are accessing it quite a bit, as opposed to just stale, cold, dead stuff that they want to put off somewhere else and not have to maintain. >> Yeah, and one of the things we're seeing that we pointed at the Wikibon team is a lot of these pricings are... The clients don't know that they're being billed for something that they may not be using, so AI or machine learning could come in potentially. So this is kind of what you're getting at. >> Exactly. >> The operational things that Amazon's doing to keep prices low for the customer, not get bill shock. >> Chris: That's right. >> Okay, so that's cool. What else would you ask him about culture or is there anything you would ask him about his plans... What else would you ask him? >> I think another big thing would be just more plans on what's going to be done around data analytics and big data. We can call it whatever we want, but they've been so good at the semi-structured or unstructured content, you know, when we think about AWS and where AWS was going with S3, but now there's a whole new phenomenon going on around this and companies are as every bit as scared about that transition as they were about the prior cloud transition, so what really are their plans there when they think about that, and for instance, things like how does GPU processing come into play versus CPU processing. There's going to be a really interesting discussion I think you're going to have with him on that front. >> Awesome, let's talk about IT. IT and information technology departments formerly known as DP, data processing, information-- All that stuff's changed, but there were still guys that were buying hardware, buying Netapp tries that you used to work for, buying EMC, doing data domain, doing a lot of stuff. These guys are essentially looking at potentially a role where-- I mean, for instance, we use Amazon. We're a big customer, happy customer. >> Chris: Mm-hmm. >> We don't have those guys. >> Chris: Right. >> So if I'm an IT guy I might be thinking shit, I could be out of a job, Amazon's doing my job, so I'm not saying that's the case but that's certainly a fear. >> Chris: Absolutely. >> But the business models have to shift from old IT to new IT. >> Chris: Mm-hmm. >> What does that game look like? What is this new IT game? Is it more, not a department view, is it more of a holistic view, and what's the sentiment around the buyers and your customers that you talk to around how do they message to the IT guys, like, look, there's higher valued jobs you could go to. >> Right. >> You mention analytics... >> That's right. >> What's the conversation? Certainly some guys won't make the transition and might not make it, but what's the narrative? >> Well, I think that's where it just starts with what segment are you talking about, so if you look at it and say just break it down between the large enterprise, the uber enterprise that we've seen for so long, mid-size and smaller, the mid-size and smaller are gone, okay. Outside of just specific industries where they really need that control, media and entertainment might be an example. That mid-size business is gone for those vendors, right? So those vendors are now having to grab on and say I'm part of that cloud phenomenon, my hyper-cloud of the future. I'm part of that phenomenon, and that becomes really the game that they have to play, but when you look at those IT shops I think they really need to figure out where are they adding value and where are they just enabling value that's being driven by cloud providers, and really that's all they are is a facilitator, and they've got to shift their energy towards where am I adding value, and that becomes more that-- >> That's differentiation, that's where differentiation is, so non-differentiated labor is the term that Wikibon analysts use. >> Oh, okay. >> That's going down, the differentiated labor is either revenue generating or something operationally more efficient, right? >> That's right, and it's all going to be revenue generating now. I mean, I used to be out there talking about things like archiving, and archiving's a great idea. It's something where I'm going to save money, okay, but I got this many projects on my list if I'm a CIO of where I can save money. I'm being under pressure about how am I going to go generate money, and that's where I think people are really shifting their eyeballs and their attention, is more towards that. >> And you got IOT coming down the pike. I mean, we're hearing is from what I hear from CIOs when we have a few in-depth conversations is look, I got to get my development team ramped up and being more cloud native, more microservice and I got to get more app development going that drives revenue for my business, more efficiency. >> Chris: Right. >> I have a digital transformation across the company in terms of hiring culture and talent. >> Chris: Mm-hmm. >> And then I got pressure to do IOT. >> Chris: Right. >> And I got security, so of those five things, IOT tends to fall out, security takes preference because of the security challenges, and then that's already putting their plate full right there. >> That's right, that's real time and those people are-- >> Those are core issues. >> Putting too much pressure on that right now and then you're thinking about IT and in the meantime, by the way, most of these places don't have the dev ops shop that's operating on a flywheel, right? So you're not... What's it, Goldman Sachs has 5,000 developers, right? That's bigger than most tech companies, so as a consequence you start thinking about well, not everybody looks like that. What the heck are they going to do in the future. They're going to have to be thinking about new ways of accessing that type of capability. >> This is where the cloud really shines in my mind. I think in the cloud, too, it's starting to fragment the conversations. People will try to pigeonhole Amazon. I see Microsoft-- I've been very critical of Microsoft in their cloud because-- First of all, I love the move that they're making. I think it's a smart move business-wise, but they bundle in 365 Office, that's not really cloud, it's just SAS, so then you start getting into the splitting of the hairs of well, SAS is not included in cloud. But come on, SAS is cloud. >> Chris: Mm-hmm. >> Well, maybe Amazon should include their ecosystem that would be a trillion dollar revenue number, so all companies don't look the same. >> That's right. >> And so from an enterprise that's a challenge. >> Chris: Mm-hmm. >> Do I got to hire developers for Asger, do I got to hire developers for Amazon, do I got to hire developers for Google. >> Chris: Mm-hmm. >> There's no stack consistency across private enterprises to cloud. >> Chris: So I have-- >> Because I'm a storage guy, I've got Netapp drives and now I've got an Amazon thing. I like Amazon, but now I got to go Asger, what the hell do I do? >> I got EMCs here and I got Nimbles there and HP and I've still got tape from IBM from five decades ago, so, John, I got a great term for you that's going to be a key one, I think, in the ability. It's called histocompatibility, and this is really about... >> Oh, here we go. Let's get nerdy with the tape glasses on. >> It's really about the ability to be able to inter-operate with all this system and some of these systems are live systems, they're current systems. Some of it's garbage that should've been thrown out a long time ago and actually recycled. So I think histocompatibility is going to be a really, really big deal. >> Well, keep the glasses on. Let's get down in the weeds here. >> Okay. >> I like the-- With the pocket protector, if you had the pocket protector we'd be in good shape. >> Yep. >> So, vendors got to compete with these buzzwords, become buzzword bingo, but there are trends that you're seeing. You've done some analysis of how the positionings and you're also a positioning guru as well. There's ways to do it and that's a challenge is for suppliers, vendors who want to serve customers. They got to rise above the noise. >> Chris: That's right. >> That's a huge problem. What are you seeing in terms of buzzword bingo-- >> Oh, my goodness. >> Because like I said, I used to work for HP in the old days and they used to have an expression, you know, don't call it what it is because that's boring and make it exciting, so the analogy they used was sushi is basically cold, dead fish. (laughing) So, sushi is a name for cold, dead fish. >> Chris: Yeah. >> So you don't call your product cold, dead fish, you call it sushi. >> Chris: Right. >> That was the analogy, so in our world-- >> Chris: That was HP-UX. >> That was HP-UX, you know, HP was very engineering. >> Yes. >> That's not-- Sushi doesn't mean anything. It's cold, dead fish, that's what it is. >> Right. >> That's what it does. >> That's right. >> So a lot of vendors can error in that they're accurate and their engineers, they call it what it is, but there's more sex appeal with some better naming. >> Totally. >> What are you seeing in terms of the fashion, if you will, in terms of the naming conventions. Which ones are standing out, what's the analysis. >> Well, I think the analysis is this, you start with your adjectives with STEM words, John, and what I mean by that is things like histocompatibility. It could start with things like agility, flexibility, manageability, simplicity, all those sorts of things, and they've got to line those terms up and go out there, but I think the thing that right now-- >> But those are boring, I saw a press release saying we're more agile, we're the most effective software platform with agility and dev ops, like what the hell does that mean? >> Yeah, I think you also have to combine it with a heavy degree of hyperbole, right? So hyperbole, an off-the-cuff statement that is so extreme that you'd never really want to be tested on it, so an easy way to do that is to add hyper in front of all that. So it's hyper-manageability, right, and so I think we're going to see a whole new class of words. There are 361 great adjectives with STEMs, but-- >> Go through the list. >> Honestly. >> Go through the list that you have. >> I mean, there's so many, John, it's... >> So hyper is an easy one, right? >> Hyper is easy, I think that's a very simple one. I think now we also see that micro is so big, right, because we're talking about microservices and that's really the big buzzword in the industry right now. So everything's going to be about micro-segmenting your apps and then allowing those apps to be manifest and consumed by an uber app, and ultimately that uber app is an ultra app, so I think ultra is going to be another term that we see heading into the spectrum as well. >> And so histocompatibility is a word you mentioned, just here in my notes. >> Yep. >> You mentioned, so histo means historical. >> Exactly. >> So it means legacy. >> Chris: That's right. >> So basically backwards compatible would be the boring kind of word. >> Chris: That's right. >> And histocompatibility means we got you covered from legacy to cloud, right. >> Uh-huh. >> Or whatever. >> You bet. >> Micro-segmentility really talks to the granularity of data-driven things, right? >> That's right, another one would be macro API ability, it's kind of a mouthful, but everyone needs an API. I think we've seen that and because they're consuming so many different pieces and trying to assemble those they've got to have something that sits above. So macro API ability, I think, is another big one, and then lastly is this notion of mobility, right. We talk about-- As you said earlier, we talked about clouds and going from-- It's not just good enough to talk about hybrid cloud now, it's about multi-cloud. Well, multi-cloud means we're thinking about how we can place these apps and the data in all kinds of different spaces, but I've got to be able to have those be mobile, so hyper-mobility becomes a key for these applications as well. >> So hyper-scale we've seen, we've seen hyper-convergence. Hyper is the most popular-- >> Chris: Absolutely. >> Adjective with STEM, right? >> Chris: It's big. >> STEM words, okay, micro makes sense because, you know, micro-targeting, micro-segmentation, microservices, it speaks to the level of detail. >> Chris: Right. >> I love that one. >> Chris: Right. >> Which ones aren't working in your mind? We see anything that's so dead on arrival... >> Sure, I think there's a few that aren't working anymore. You got your agility, you got your flexibility, you got your manageability, and you got your simplicity. Okay, I could take all four of those and toss those over there in the trash because every vendor will say that they have those capabilities for you, so how does that help you distinguish yourself from anyone else. >> So that's old hat. >> It's just gone. >> Yeah, never fight fashion, as Jeremy Burton at EMC, now at Dell Technologies, said on theCUBE. I love that, so these are popular words. This is a way to stand out and be relevant. >> That's right. >> This is the challenge for vendors. Be cool and relevant but not be offensive. >> Yeah. >> All right, so what's your take on the current landscape for things like how do companies market themselves. Let's say they get the hyper in all the naming and the STEM words down. They have something compelling. >> Chris: Right. >> Something that's differentiated, something unique, how do companies stand out above the crowd, because the current way is advertising's not working. We're seeing fake news, you're seeing the analyst firms kind of becoming more old, slower, not relevant. I mean, does the Magic Quadrant really solve that problem or are they just putting that out there? If I'm a marketer, I'm a B2B marketer. >> Yeah. >> Obviously besides working with theCUBE and our team, so obviously great benefits. Plug there, but seriously, what do you advise? >> Yeah, I think the biggest thing is, you know, you think about marketing as not only reaching your target market, but also enabling your sales force and your channel partners, and frankly, the best thing that I've found in doing that, John, is starting every single piece that we would come up with with a number. How much value are we generating, whether it's zero clicks to get this thing installed. It's 90% efficiency, and then prove it. Don't just throw it out there and say isn't that good enough, but numbers matter because they're meaningful and they stimulate the conversation, and that's ultimately what all of this is. It's a conversation about is this going to be relevant for you, so that's the thing that I start with. >> So you're say being in the conversation matters. >> Absolutely. >> Yeah. >> Absolutely. >> What's the thought leadership view, what's your vision on how a company should be looking at thought leadership. Obviously you're seeing more of a real-time-- I call it the old world was batch marketing. >> Chris: Mm-hmm. >> E-mail marketing, do the normal things, get the white papers, do those things. You know, go to events, have a booth, and then the new way is real-time. >> Chris: Mm-hmm. >> Things are happening very fast-- >> That's right. >> In the market, people are connected now. It's a global, basically, message group. >> That's right. >> Twitter, LinkedIn, Facebook and all this stuff. >> It's really an unfulfilled need that you guys are really looking to fill, which is to provide that sort of real-time piece of it, but I think vendors trip over themselves and they think about I need a 50 page vision. They don't need a 50 page vision. What they need is here are a couple of dimensions on which this industry is going to change, and then commit to them. I think the biggest problem that many vendors have is they won't commit, they hedge, as opposed to they go all in behind those and one thing we talk about at Chasm Institute is if you're going to fail, fail fast, and that really means that you commit full time behind what you're pushing. >> Yeah, and of course what the Chasm, what it's based upon, you got to get to mainstream, get to early pioneers, cross the chasm. The other paradigm that I always loved from Jeffrey Moore was inside the tornado. Get inside the tornado because if you don't get in you're going to be spun out, so you've got to kind of get in the game, if you will. >> Chris: That's right. >> Don't overthink it, and this is where the iteration mindset comes in, "agile" start-up or "agile" venture. Okay, cool, so let's take a step back and reset to end the segment here. >> Mm-hmm. >> Re:Invent's coming up, obviously that's the big show of the year. VMworld, someone was commenting on Facebook VMworld 2008 was the big moment where they're comparing Amazon now to VMworld in 2008. >> Chris: Right. >> But you know, Pat Gelsinger essentially cut a great deal with Andy Jassy on Vmware. >> Chris: Right. >> And everything's clean, everything's growing, they're kicking ass. >> Chris: Mm-hmm. >> They got a private cloud and they got the hybrid cloud with Amazon. >> Yeah, it's that VMcloud on Amazon, that really seems to be the thing that's really driving their move into the future, and I think we're going to see from both of those folks, you are going to see so much on containers. Containerization, ultra-containers, hyper-containers, whatever it may be. If you're not speaking container language, then you are yesterday's news, right? >> And Kubernetes' certainly the orchestration piece right underneath it to kind of manage it. Okay, final point, what's in store for the legacy, because you're seeing a few major trends that we're pointing out and we're watching very closely, which really I put into two buckets. I know Wikibon's a more disciplined approach, I'm more simple about that. The decentralization trend we're seeing with Blockchain, which is kind of crazy and bubbly but very infrastructure relevant, this decentralized, disrupting, non-decentralized incumbence, so that's one trend and the other one is what cloud's doing to legacy IT vendors, Oracle, you know, these traditional manufacturers like that HP and Dell and all these guys, and Netapp which is transforming. So you've got disruption on both sides, cloud and like a decentralized model, apps, what's the position, view, from your standpoint, for these legacy guys? >> It's going to be quite an interesting one. I think they have to ride the wave, and I'll steal this from Peter Levine, from Andreessen, right? He talks about the end of cloud computing, and really what that is is just basically saying everything is going to be moving to the edge and there's going to be so much more compute at the edge with IOT and you can think about autonomous vehicles as the ultimate example of that, where you're talking about more powerful computers, certainly, than this that are sitting in cars all over the place, so that's going to be a big change, and those vendors that have been selling into the core data center for so long are going to have to figure out their way of being relevant in that universe and move towards that. And like we were talking about before, commit to that. >> Yeah. >> Right, don't just hedge, but commit to it and move. >> What's interesting is that I was talking with some executives at Alibaba when I was in China for part of the Alibaba Cloud Conference and Amazon had multiple conversations with Andy Jassy and his team over the years. It's interesting, a lot of people don't understand the nuances of kind of what's going on in cloud, and what I'm seeing is it's essentially, to your point, it's a compute game. >> Chris: Yeah. >> Right, so if you look at Intel for instance, Alibaba told me on my interview, they don't view Intel as a chip company anymore, they're a compute company, right, and CJ Bruno, one of the executives there, reaffirmed that. So Intel's looking at the big picture saying the cloud's a computer. Intel Inside is a series of compute, and you mentioned that the edge, Jassy is building a set of services with his team around core compute, which has storage, so this is essentially hyper-converged cloud. >> That's right. >> This is a pretty big thing. What's the one thing that people might not understand about this. If you could kind of illuminate this trend. I mean, the old Intel now turned into the new Intel, which is a monster franchise continuing to grow. >> Mm-hmm. >> Amazon, people see the numbers, they go oh, my god, they're a leader, but they have so much more headroom. >> Chris: Right, right. >> And they've got everyone else playing catch up. >> Yeah. >> What's the real phenomenon going on here? >> I think you're going to see more of this aggregation phenomenon where one vendor can't solve this entire problem. I mean, look at most recently, in the last two weeks, Intel and AMD getting together. Who would've thought that would happen? But they're just basically admitting we got a real big piece of the equation, Intel, and then AMD can fulfill this niche because they're getting killed by NVIDIA, but you're going to see just more of these industry conglomerations getting together to try and solve the problem. >> Just to end the segment, this is a great point. NVIDIA had a niche segment, graphics, now competing head to head with Intel. >> Chris: That's right. >> So essentially what's happening is the landscape is completely changing. Once competitors no longer-- New entrants, new competitors coming in. >> Chris: Mm-hmm. >> So this is a massive shift. >> Chris: It is. >> Okay, Chris Cummings here inside theCUBE. I'm John Furrier of CUBE Conversation. There's a massive shift happening, the game has changed and it's incumbent upon start-ups, venture capital, you know, Blockchain, ICOs or whatever's going on. Look at the new chessboard, look at the game and figure it out. Of course, we'll be broadcasting live at AWS re:Invent in a couple weeks. Stay tuned, more coverage, thanks for watching. (techy music playing)

Published Date : Nov 16 2017

SUMMARY :

and it's all about the future. and right now you're working with a lot all the way up to massive consolidation So a lot of action, and because of the day but at the same time a lot of pressure You can't just go and become a platform in the industry, and the expression we use is if you don't know and I think what we found is this era Let's consider the tech things, you have-- A tech perspective, let's get into the tech conversation. But also the reality is you got but no doubt scale is the big issue. and sometimes for the wrong reasons. So I'm going to be having a one-on-one in that business, how are they going to start diversifying that piece of paper over that the fees and not have to maintain. Yeah, and one of the things we're seeing to keep prices low for the customer, not get bill shock. What else would you ask him about culture about the prior cloud transition, that you used to work for, buying EMC, so I'm not saying that's the case But the business models have to how do they message to the IT guys, like, and that becomes really the game that they have to play, is the term that Wikibon analysts use. That's right, and it's all going to and I got to get more app development going I have a digital transformation across the company because of the security challenges, What the heck are they going to do in the future. First of all, I love the move that they're making. so all companies don't look the same. Do I got to hire developers for Asger, private enterprises to cloud. I like Amazon, but now I got to go Asger, so, John, I got a great term for you that's going to Let's get nerdy with the tape glasses on. It's really about the ability Let's get down in the weeds here. With the pocket protector, if you had You've done some analysis of how the positionings What are you seeing in terms of buzzword bingo-- so the analogy they used was So you don't call your product It's cold, dead fish, that's what it is. and their engineers, they call it what it is, What are you seeing in terms of the fashion, and they've got to line those terms up and go out there, and so I think we're going to see a whole new class of words. and that's really the big buzzword you mentioned, just here in my notes. So basically backwards compatible we got you covered from legacy to cloud, right. but I've got to be able to have those be mobile, Hyper is the most popular-- microservices, it speaks to the level of detail. We see anything that's so dead on arrival... so how does that help you distinguish I love that, so these are popular words. This is the challenge for vendors. the naming and the STEM words down. I mean, does the Magic Quadrant really solve that problem Plug there, but seriously, what do you advise? so that's the thing that I start with. I call it the old world was batch marketing. get the white papers, do those things. In the market, people are connected now. and that really means that you commit Get inside the tornado because if you don't get in and reset to end the segment here. that's the big show of the year. But you know, Pat Gelsinger essentially And everything's clean, everything's growing, got the hybrid cloud with Amazon. that really seems to be the thing And Kubernetes' certainly the orchestration piece all over the place, so that's going to be a big change, the nuances of kind of what's going on in cloud, and CJ Bruno, one of the executives there, reaffirmed that. I mean, the old Intel now turned into the new Intel, Amazon, people see the numbers, I mean, look at most recently, in the last two weeks, now competing head to head with Intel. the landscape is completely changing. the game has changed and it's incumbent upon start-ups,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
ChrisPERSON

0.99+

Jeremy BurtonPERSON

0.99+

AmazonORGANIZATION

0.99+

Chris CummingsPERSON

0.99+

Chris CollinsPERSON

0.99+

Andy JassyPERSON

0.99+

AlibabaORGANIZATION

0.99+

Dave AlantePERSON

0.99+

Chasm GroupORGANIZATION

0.99+

Peter LevinePERSON

0.99+

IBMORGANIZATION

0.99+

AndyPERSON

0.99+

John FurrierPERSON

0.99+

MicrosoftORGANIZATION

0.99+

JohnPERSON

0.99+

90%QUANTITY

0.99+

GoogleORGANIZATION

0.99+

AWSORGANIZATION

0.99+

DellORGANIZATION

0.99+

Pat GelsingerPERSON

0.99+

ChinaLOCATION

0.99+

HPORGANIZATION

0.99+

AMDORGANIZATION

0.99+

Jeffrey MoorePERSON

0.99+

JassyPERSON

0.99+

OracleORGANIZATION

0.99+

Goldman SachsORGANIZATION

0.99+

NVIDIAORGANIZATION

0.99+

Amazon Web ServicesORGANIZATION

0.99+

next weekDATE

0.99+

CJ BrunoPERSON

0.99+

Robert Walsh, ZeniMax | PentahoWorld 2017


 

>> Announcer: Live from Orlando, Florida it's theCUBE covering Pentaho World 2017. Brought to you by Hitachi Vantara. (upbeat techno music) (coughs) >> Welcome to Day Two of theCUBE's live coverage of Pentaho World, brought to you by Hitachi Vantara. I'm your host Rebecca Knight along with my co-host Dave Vellante. We're joined by Robert Walsh. He is the Technical Director Enterprise Business Intelligence at ZeniMax. Thanks so much for coming on the show. >> Thank you, good morning. >> Good to see ya. >> I should say congratulations is in order (laughs) because you're company, ZeniMax, has been awarded the Pentaho Excellence Award for the Big Data category. I want to talk about the award, but first tell us a little bit about ZeniMax. >> Sure, so the company itself, so most people know us by the games versus the company corporate name. We make a lot of games. We're the third biggest company for gaming in America. And we make a lot of games such as Quake, Fallout, Skyrim, Doom. We have game launching this week called Wolfenstein. And so, most people know us by the games versus the corporate entity which is ZeniMax Media. >> Okay, okay. And as you said, you're the third largest gaming company in the country. So, tell us what you do there. >> So, myself and my team, we are primarily responsible for the ingestion and the evaluation of all the data from the organization. That includes really two main buckets. So, very simplistically we have the business world. So, the traditional money, users, then the graphics, people, sales. And on the other side we have the game. That's where a lot of people see the fun in what we do, such as what people are doing in the game, where in the game they're doing it, and why they're doing it. So, get a lot of data on gameplay behavior based on our playerbase. And we try and fuse those two together for the single viewer or customer. >> And that data comes from is it the console? Does it come from the ... What's the data flow? >> Yeah, so we actually support many different platforms. So, we have games on the console. So, Microsoft, Sony, PlayStation, Xbox, as well as the PC platform. Mac's for example, Android, and iOS. We support all platforms. So, the big challenge that we have is trying to unify that ingestion of data across all these different platforms in a unified way to facilitate downstream the reporting that we do as a company. >> Okay, so who ... When it says you're playing the game on a Microsoft console, whose data is that? Is it the user's data? Is it Microsoft's data? Is it ZeniMax's data? >> I see. So, many games that we actually release have a service act component. Most of our games are actually an online world. So, if you disconnect today people are still playing in that world. It never ends. So, in that situation, we have all the servers that people connect to from their desktop, from their console. Not all but most data we generate for the game comes from the servers that people connect to. We own those. >> Dave: Oh, okay. >> Which simplifies greatly getting that data from the people. >> Dave: So, it's your data? >> Exactly. >> What is the data telling you these days? >> Oh, wow, depends on the game. I think people realize what people do in games, what games have become. So, we have one game right now called Elder Scrolls Online, and this year we released the ability to buy in-game homes. And you can buy furniture for your in-game homes. So, you can furnish them. People can come and visit. And you can buy items, and weapons, and pets, and skins. And what's really interesting is part of the reason why we exist is to look at patterns and trends based on people interact with that environment. So for example, we'll see America playerbase buy very different items compared to say the European playerbase, based on social differences. And so, that helps immensely for the people who continuously develop the game to add items and features that people want to see and want to leverage. >> That is fascinating that Americans and Europeans are buying different furniture for their online homes. So, just give us some examples of the difference that you're seeing between these two groups. >> So, it's not just the homes, it applies to everything that they purchase as well. It's quite interesting. So, when it comes to the Americans versus Europeans for example what we find is that Europeans prefer much more cosmetic, passive experiences. Whereas the Americans are much things that stand out, things that are ... I'm trying to avoid stereotypes right now. >> Right exactly. >> It is what it is. >> Americans like ostentatious stuff. >> Robert: Exactly. >> We get it. >> Europeans are a bit more passive in that regard. And so, we do see that. >> Rebecca: Understated maybe. >> Thank you, that's a much better way of putting it. But games often have to be tweaked based on the environment. A different way of looking at it is a lot of companies in career in Asia all of these games in the West and they will have to tweak the game completely before it releases in these environments. Because players will behave differently and expect different things. And these games have become global. We have people playing all over the world all at the same time. So, how do you facilitate it? How do you support these different users with different needs in this one environment? Again, that's why BI has grown substantially in the gaming industry in the past five, ten years. >> Can you talk about the evolution of how you've been able to interact and essentially affect the user behavior or response to that behavior. You mentioned BI. So, you know, go back ten years it was very reactive. Not a lot of real time stuff going on. Are you now in the position to effect the behavior in real time, in a positive way? >> We're very close to that. We're not quite there yet. So yes, that's a very good point. So, five, ten years ago most games were traditional boxes. You makes a game, you get a box, Walmart or Gamestop, and then you're finished. The relationship with the customer ends. Now, we have this concept that's used often is games as a service. We provide an online environment, a service around a game, and people will play those games for weeks, months, if not years. And so, the shift as well as from a BI tech standpoint is one item where we've been able to streamline the ingest process. So, we're not real time but we can be hourly. Which is pretty responsive. But also, the fact that these games have become these online environments has enabled us to get this information. Five years ago, when the game was in a box, on the shelf, there was no connective tissue between us and them to interact and facilitate. With the games now being online, we can leverage BI. We can be more real time. We can respond quicker. But it's also due to the fact that now games themselves have changed to facilitate that interaction. >> Can you, Robert, paint a picture of the data pipeline? We started there with sort of the different devices. And you're bringing those in as sort of a blender. But take us through the data pipeline and how you're ultimately embedding or operationalizing those analytics. >> Sure. So, the game theater, the game and the business information, game theater is most likely 90, 95% of our total data footprint. We generate a lot more game information than we do business information. It's just due to how much we can track. We can do so. And so, a lot of these games will generate various game events, game logs that we can ingest into a single data lake. And we can use Amazon S3 for that. But it's not just a game theater. So, we have databases for financial information, account users, and so we will ingest the game events as well as the databases into one single location. At that point, however, it's still very raw. It's still very basic. We enable the analysts to actually interact with that. And they can go in there and get their feet wet but it's still very raw. The next step is really taking that raw information that is disjointed and separated, and unifying that into a single model that they can use in a much more performant way. In that first step, the analysts have the burden of a lot of the ETL work, to manipulate the data, to transform it, to make it useful. Which they can do. They should be doing the analysis, not the ingesting the data. And so, the progression from there into our warehouse is the next step of that pipeline. And so in there, we create these models and structures. And they're often born out of what the analysts are seeing and using in that initial data lake stage. So, they're repeating analysis, if they're doing this on a regular basis, the company wants something that's automated and auditable and productionized, then that's a great use case for promotion into our warehouse. You've got this initial staging layer. We have a warehouse where it's structured information. And we allow the analysts into both of those environments. So, they can pick their poison in respects. Structured data over here, raw and vast over here based on their use case. >> And what are the roles ... Just one more follow up, >> Yeah. >> if I may? Who are the people that are actually doing this work? Building the models, cleaning the data, and shoring data. You've got data scientists. You've got quality engineers. You got data engineers. You got application developers. Can you describe the collaboration between those roles? >> Sure. Yeah, so we as a BI organization we have two main groups. We have our engineering team. That's the one I drive. Then we have reporting, and that's a team. Now, we are really one single unit. We work as a team but we separate those two functions. And so, in my organization we have two main groups. We have our big data team which is doing that initial ingestion. Now, we ingest billions of troves of data a day. Terabytes a data a day. And so, we have a team just dedicated to ingestion, standardization, and exposing that first stage. Then we have our second team who are the warehouse engineers, who are actually here today somewhere. And they're the ones who are doing the modeling, the structuring. I mean the data modeling, making the data usable and promoting that into the warehouse. On the reporting team, basically we are there to support them. We provide these tool sets to engage and let them do their work. And so, in that team they have a very split of people do a lot of report development, visualization, data science. A lot of the individuals there will do all those three, two of the three, one of the three. But they do also have segmentation across your day to day reporting which has to function as well as the more deep analysis for data science or predictive analysis. >> And that data warehouse is on-prem? Is it in the cloud? >> Good question. Everything that I talked about is all in the cloud. About a year and a half, two years ago, we made the leap into the cloud. We drunk the Kool-Aid. As of Q2 next year at the very latest, we'll be 100% cloud. >> And the database infrastructure is Amazon? >> Correct. We use Amazon for all the BI platforms. >> Redshift or is it... >> Robert: Yes. >> Yeah, okay. >> That's where actually I want to go because you were talking about the architecture. So, I know you've mentioned Amazon Redshift. Cloudera is another one of your solutions provider. And of course, we're here in Pentaho World, Pentaho. You've described Pentaho as the glue. Can you expand on that a little bit? >> Absolutely. So, I've been talking about these two environments, these two worlds data lake to data warehouse. They're both are different in how they're developed, but it's really a single pipeline, as you said. And so, how do we get data from this raw form into this modeled structure? And that's where Pentaho comes into play. That's the glue. If the glue between these two environments, while they're conceptually very different they provide a singular purpose. But we need a way to unify that pipeline. And so, Pentaho we use very heavily to take this raw information, to transform it, ingest it, and model it into Redshift. And we can automate, we can schedule, we can provide error handling. And so it gives us the framework. And it's self-documenting to be able to track and understand from A to B, from raw to structured how we do that. And again, Pentaho is allowing us to make that transition. >> Pentaho 8.0 just came out yesterday. >> Hmm, it did? >> What are you most excited about there? Do you see any changes? We keep hearing a lot about the ability to scale with Pentaho World. >> Exactly. So, there's three things that really appeal to me actually on 8.0. So, things that we're missing that they've actually filled in with this release. So firstly, we on the streaming component from earlier the real time piece we were missing, we're looking at using Kafka and queuing for a lot of our ingestion purposes. And Pentaho in releasing this new version the mechanism to connect to that environment. That was good timing. We need that. Also too, get into more critical detail, the logs that we ingest, the data that we handle we use Avro and Parquet. When we can. We use JSON, Avro, and Parquet. Pentaho can handle JSON today. Avro, Parquet are coming in 8.0. And then lastly, to your point you made as well is where they're going with their system, they want to go into streaming, into all this information. It's very large and it has to go big. And so, they're adding, again, the ability to add worker nodes and scale horizontally their environment. And that's really a requirement before these other things can come into play. So, those are the things we're looking for. Our data lake can scale on demand. Our Redshift environment can scale on demand. Pentaho has not been able to but with this release they should be able to. And that was something that we've been hoping for for quite some time. >> I wonder if I can get your opinion on something. A little futures-oriented. You have a choice as an organization. You could just take roll your own opensource, best of breed opensource tools, and slog through that. And if you're an internet giant or a huge bank, you can do that. >> Robert: Right. >> You can take tooling like Pentaho which is end to end data pipeline, and this dramatically simplifies things. A lot of the cloud guys, Amazon, Microsoft, I guess to a certain extent Google, they're sort of picking off pieces of the value chain. And they're trying to come up with as a service fully-integrated pipeline. Maybe not best of breed but convenient. How do you see that shaking out generally? And then specifically, is that a challenge for Pentaho from your standpoint? >> So, you're right. That why they're trying to fill these gaps in their environment. To what Pentaho does and what they're offering, there's no comparison right now. They're not there yet. They're a long way away. >> Dave: You're saying the cloud guys are not there. >> No way. >> Pentaho is just so much more functional. >> Robert: They're not close. >> Okay. >> So, that's the first step. However, though what I've been finding in the cloud, there's lots of benefits from the ease of deployment, the scaling. You use a lot of dev ops support, DBA support. But the tools that they offer right now feel pretty bare bones. They're very generic. They have a place but they're not designed for singular purpose. Redshift is the only real piece of the pipeline that is a true Amazon product, but that came from a company called Power Excel ten years ago. They licensed that from a separate company. >> Dave: What a deal that was for Amazon! (Rebecca and Dave laugh) >> Exactly. And so, we like it because of the functionality Power Excel put in many year ago. Now, they've developed upon that. And it made it easier to deploy. But that's the core reason behind it. Now, we use for our big data environment, we use Data Breaks. Data Breaks is a cloud solution. They deploy into Amazon. And so, what I've been finding more and more is companies that are specialized in application or function who have their product support cloud deployment, is to me where it's a sweet middle ground. So, Pentaho is also talking about next year looking at Amazon deployment solutioning for their tool set. So, to me it's not really about going all Amazon. Oh, let's use all Amazon products. They're cheap and cheerful. We can make it work. We can hire ten engineers and hack out a solution. I think what's more applicable is people like Pentaho, whatever people in the industry who have the expertise and are specialized in that function who can allow their products to be deployed in that environment and leverage the Amazon advantages, the Elastic Compute, storage model, the deployment methodology. That is where I see the sweet spot. So, if Pentaho can get to that point, for me that's much more appealing than looking at Amazon trying to build out some things to replace Pentaho x years down the line. >> So, their challenge, if I can summarize, they've got to stay functionally ahead. Which they're way ahead now. They got to maintain that lead. They have to curate best of breed like Spark, for example, from Databricks. >> Right. >> Whatever's next and curate that in a way that is easy to integrate. And then look at the cloud's infrastructure. >> Right. Over the years, these companies that have been looking at ways to deploy into a data center easily and efficiently. Now, the cloud is the next option. How do they support and implement into the cloud in a way where we can leverage their tool set but in a way where we can leverage the cloud ecosystem. And that's the gap. And I think that's what we look for in companies today. And Pentaho is moving towards that. >> And so, that's a lot of good advice for Pentaho? >> I think so. I hope so. Yeah. If they do that, we'll be happy. So, we'll definitely take that. >> Is it Pen-ta-ho or Pent-a-ho? >> You've been saying Pent-a-ho with your British accent! But it is Pen-ta-ho. (laughter) Thank you. >> Dave: Cheap and cheerful, I love it. >> Rebecca: I know -- >> Bless your cotton socks! >> Yes. >> I've had it-- >> Dave: Cord and Bennett. >> Rebecca: Man, okay. Well, thank you so much, Robert. It's been a lot of fun talking to you. >> You're very welcome. >> We will have more from Pen-ta-ho World (laughter) brought to you by Hitachi Vantara just after this. (upbeat techno music)

Published Date : Oct 27 2017

SUMMARY :

Brought to you by Hitachi Vantara. He is the Technical Director for the Big Data category. Sure, so the company itself, gaming company in the country. And on the other side we have the game. from is it the console? So, the big challenge that Is it the user's data? So, many games that we actually release from the people. And so, that helps examples of the difference So, it's not just the homes, And so, we do see that. We have people playing all over the world affect the user behavior And so, the shift as well of the different devices. We enable the analysts to And what are the roles ... Who are the people that are and promoting that into the warehouse. about is all in the cloud. We use Amazon for all the BI platforms. You've described Pentaho as the glue. And so, Pentaho we use very heavily about the ability to scale the data that we handle And if you're an internet A lot of the cloud So, you're right. Dave: You're saying the Pentaho is just So, that's the first step. of the functionality They have to curate best of breed that is easy to integrate. And that's the gap. So, we'll definitely take that. But it is Pen-ta-ho. It's been a lot of fun talking to you. brought to you by Hitachi

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

Rebecca KnightPERSON

0.99+

RebeccaPERSON

0.99+

Robert WalshPERSON

0.99+

RobertPERSON

0.99+

DavePERSON

0.99+

PentahoORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

AsiaLOCATION

0.99+

WalmartORGANIZATION

0.99+

AmericaLOCATION

0.99+

ZeniMax MediaORGANIZATION

0.99+

ZeniMaxORGANIZATION

0.99+

Power ExcelTITLE

0.99+

second teamQUANTITY

0.99+

GoogleORGANIZATION

0.99+

twoQUANTITY

0.99+

two main groupsQUANTITY

0.99+

two groupsQUANTITY

0.99+

WolfensteinTITLE

0.99+

oneQUANTITY

0.99+

Orlando, FloridaLOCATION

0.99+

SonyORGANIZATION

0.99+

two functionsQUANTITY

0.99+

threeQUANTITY

0.99+

bothQUANTITY

0.99+

90, 95%QUANTITY

0.99+

next yearDATE

0.99+

Kool-AidORGANIZATION

0.99+

100%QUANTITY

0.99+

iOSTITLE

0.99+

todayDATE

0.99+

DoomTITLE

0.99+

yesterdayDATE

0.99+

Hitachi VantaraORGANIZATION

0.99+

two main bucketsQUANTITY

0.98+

GamestopORGANIZATION

0.98+

FalloutTITLE

0.98+

two environmentsQUANTITY

0.98+

first stepQUANTITY

0.98+

one itemQUANTITY

0.98+

Five years agoDATE

0.98+

AndroidTITLE

0.98+

one gameQUANTITY

0.98+

Pentaho WorldTITLE

0.98+

three thingsQUANTITY

0.98+

first stageQUANTITY

0.98+

Pen-ta-ho WorldORGANIZATION

0.98+

Pentaho Excellence AwardTITLE

0.98+

this yearDATE

0.98+