Rajiv Ramaswami, Nutanix | Supercloud22
[digital Music] >> Okay, welcome back to "theCUBE," Supercloud 22. I'm John Furrier, host of "theCUBE." We got a very special distinguished CUBE alumni here, Rajiv Ramaswami, CEO of Nutanix. Great to see you. Thanks for coming by the show. >> Good to be here, John. >> We've had many conversations in the past about what you guys have done. Again, the perfect storm is coming, innovation. You guys are in an interesting position and the Supercloud kind of points this out. We've been discussing about how multi-cloud is coming. Everyone has multiple clouds, but there's real structural change happening right now in customers. Now there's been change that's happened, cloud computing, cloud operations, developers are doing great, but now something magical's happening in the industry. We wanted to get your thoughts on that, that's called Supercloud. >> Indeed. >> How do you see this shift? I mean, devs are doing great. Ops and security are trying to get cloud native. What's happening in your opinion? >> Yeah, in fact, we've been talking about something very, very similar. I like the term supercloud. We've been calling it hybrid multicloud essentially, but the point being, companies are running their applications and managing their data. This is lifeblood for them. And where do they sit? Of course, some of these will sit in the public cloud. Some of these are going to sit inside their data centers and some of these applications increasingly are going to run in edges. And now what most companies struggle with is every cloud is different, their on-prem is different, their edge is different and they then have a scarcity of staff. Operating models are different. Security is different. Everything about it is different. So to your point, people are using multiple clouds and multiple locations. But you need to think about cloud as an operating model and what the supercloud or hyper multicloud delivers is really a consistent model, consistent operating model. One way for IT teams to operate across all of these environments and deliver an agile infrastructure as a service model to their developers. So that from a company's managed point of view, they can run their stuff wherever they want to, completely with consistency, and the IT teams can help support that easily. >> You know, it's interesting. You see a lot of transformation, certainly from customers, they were paying a lot of operating costs for IT. Now CapEx is covered by, I mean, CapEx now is covered by the cloud, so it's OpEx. They're getting core competencies and they're becoming very fluent in cloud technologies. And at the same time the vendors are saying, "Hey, you know, buy our stuff." And so you have the change over, how people relate to each other, vendors and customers, where there's a shared model where, okay, you got use cases for the cloud and use cases on-premise, both CapEx, both technology. You mentioned that operating model, Where's the gap? 'Cause nobody wants complexity, and you know, the enterprise, people love to add, solve complexity with more complexity. >> That's exactly the problem. You just hit the nail on the head, which is enterprise software tends to be very complex. And fundamentally complexity has been a friend for vendors, but the point being, it's not a friend for a company that's trying to manage their IT infrastructure. It's an an enemy because complexity means you need to train your staff, you need very specialized teams, and guess what? Talent is perhaps the most scarce thing out there, right? 
People talk about, you know, in IT, they always talk about people, process, technology. There's plenty of technology out there, but right now there's a big scarcity of people, and I think that talent is a major issue. And not only that, you know, it's not that we have as many specialized people who know storage, who know compute, who know networking. Instead, what you're getting is a bunch of new college grads coming in, who have generalized skill sets, who are used to having a consumer-like experience with software and applications, and they want to see that from their enterprise software vendors. >> You know, it's just, so you mentioned that, when the hyperconverged, we saw that movie that was bringing things together. Now you're seeing the commoditization of compute, storage and networking, but yet the advancement of higher level services and things like Kubernetes for orchestration, that's an operating opportunity for people to get more orchestration, but that's a trade off. So we're seeing a new trend in the supercloud where it's not all Kubernetes all the time. It's not all AWS all the time. It's the new architecture, where there's trade offs. How do you see some of these key trade offs? I know you talked to a lot of your customers, they're kind of bringing things together, putting things together, kind of a day zero mentality. What are some of those key trade offs and architectural decision points? >> So there's a couple of points there, I think. First is that most customers are on a journey of sorts, and their journey is, well, they want to have a modern infrastructure. Many of them have on-prem footprints, and they're looking to modernize that infrastructure. They're looking to adopt cloud operating models. They're looking to figure out how they can extend and leverage these public clouds appropriately. The problem is when they start doing this, they find that everything is different. Every little piece, every cloud is different, their on-prem is different, and this results in a lot of complexity. In some ways, we at Nutanix solved this problem within data centers by converging separate silos of compute, storage and network. That's what we did with HCI. And now this notion of supercloud is just simply about converging different clouds and different data. >> Kind of the same thing.
Devs don't care. I mean, like, whatever runs the software, go faster, but ops and security teams, they want choice, but they want functionality. So, what's that trade off? Talk about this lock-in dynamic, and how to get around it. >> Yeah, and I think that's been some of the fundamental tenets of what we do. I mean, of course, people don't like lock-in, but they also want simplicity. And we provide both. Our philosophy is we want to make things as simple as possible. And that's one of the big differentiators that we have compared to other players. Our whole mission inside the company is to make things simple. But at the same time, we also want to provide customers with that flexibility, and at every layer in the stack, you don't want lock-in, to your point. So, at the very bottom, hardware: choice of hardware. Choice of hardware could be any of the vendors you work with, or public cloud, bare metal. When you look at hypervisor, lots of choices. You got VMware, you got our own AHV, which is a KVM-based open source hypervisor, no lock-in there, provides complete flexibility. Then we have a storage stack, a distributed storage stack, which we provide. And then of course layers above that. Kubernetes: pick your Kubernetes runtime of choice. Pick your Kubernetes orchestrator and management of choice. So our whole goal is to provide that flexibility at every layer in the stack, allowing the customer to make the choice. They can decide how much they want to go with the full stack or how much they want to piecemeal it, and there's a trade off there. And they get more flexibility, but at the cost of a little bit more complexity, and that, I think, is the trade off that each customer has to weigh. >> Okay, you guys have been transforming for many, many years, and we've been covering it on SiliconANGLE and theCUBE, to software. >> Yes. >> I know you have hardware as well, but also software services. And you've been on the cloud bandwagon years ago, and now you've made a lot of progress. What's the current strategy for you guys? How do you fit in? 'Cause public cloud has great use cases, great examples of success there, but that's not the only game in town. You've got on-premise and edge. What are you guys doing? What specifically are customers leaning on you for? How are you providing that value? What's the innovation strategy? >> Very simply, we provide a cloud software platform today. We don't actually sell any more hardware. It's not on our books anymore. We're a pure software company. So we sell a cloud software platform on top of which our customers can run all their applications, including the most mission critical applications. And they can use our platform wherever, to your point, on the supercloud. I keep coming back to that. We started out with our on-prem genes. That's where we started. We've extended that to Azure and AWS. And we are extending, of course, we've always been very strong when it came to the edge, and extending that out to the edge. And so today we have a cloud platform that allows our customers to run these apps, whatever the apps may be, and manage all their data, because we provide structured and unstructured data: blocks, files, objects are all part of the platform. And we provide that in a consistent way across all of these locations, and we deliver the cloud operating model. >> So on the hardware thing, you guys don't have hardware anymore. >> We don't sell hardware anymore. We work with a whole range of hardware partners: HP, Dell, Supermicro, you name it, Lenovo.
>> Okay, so if I'm like a Telco and I want to build a data center at my tower, which could be only a few boxes, who do I buy that from? >> So you buy the software from us and you can buy the hardware from your choice of hardware partners. >> So yeah, whoever's selling the servers at that point. >> Yeah. >> Okay, so you send on the server. >> Yeah, we send on the server. >> Yeah, sound's good. So no hardware, so back to software that could transfer. How's that going, good? >> It's gone very well because, you know, we made two transformations. One is of course we were selling appliances when we started out, and then we started selling software, and now it's all fully subscription. So we're 100% subscription company. So our customers are buying subscriptions. They have the flexibility to get whatever duration they want. Again, to your philosophy, there's no lock-in. There is no long term lock-in here. We are happy if a customer chooses us for a year versus three years, whatever they like. >> I know that you've been on the road with customers this summer. It's been great to get out and see people in person. What are you learning? What are they viewing? What's their new Instagram picture of Nutanix? How do they see you? And how do you want them to see you? >> What they've seen us in the past has been, we created this whole category of HCI, Hyperconverged Infrastructure. They see us as a leader there and they see us as running some of their applications, not necessarily all their applications, especially at the very big customers. In the smaller customers, they run everything on us, but in the bigger customers, they run some workload, some applications on us. And now what they see is that we are now, if taking them on the journey, not only to run all their applications, whatever, they may be, including the most mission critical database workloads or analytics workloads on our platform, but also help them extend that journey into the public cloud. And so that's the journey we are on, modernized infrastructure. And this is what most of our customers are on. Modernizing the infrastructure, which we help and then creating a cloud operating model, and making that available everywhere. >> Yeah, and I think one, that's a great, and again, that's a great segue to supercloud, which I want to get your thoughts on because AWS, for example, spent all that CapEx, they're called the hyperscaler. They got H in there and that's a hyperscale in there. And now you can leverage that CapEx by bringing Nutanix in, you're a hyperscale-like solution on-premise and edge. So you take advantage of both. >> Absolutely. >> The success. >> Exactly. >> And a trajectory of cloud, so your customers, if I get this right, have all the economies of scale of cloud, plus the benefits of the HCI software kind of vibe. >> Absolutely. And I'll give you some examples how this plays out in the real world based on all my travels here. >> Yeah, please do. So we just put out a case study on a customer called FSP. They're a betting company, online betting company based out of the UK. And they run on our platform on-prem, but what they saw was they had to expand their operations to Asia and they went to Taiwan. And the problem for them was, they were told they had to get in business in Taiwan within a matter of a month, and they didn't know how to do it. And then they realized that they could just take the exact same software that they were running on our platform, and run it in an AWS region sitting in Taiwan. 
And they were up in business in less than a month, and they had now operations ready to go in Asia. I mean, that's a compelling business value. >> That's agile, that's agile. >> Agile. >> That's agile and a great... >> Versus the alternative would be weeks, months. >> Months. First of all, I mean, just think about it: they have to open a data center, which probably takes them... they have to buy the hardware, which, you know, with supply chain deliveries... >> Supply chain. >> And God knows how long that takes. >> Oh God, yeah. >> So compared to all that, here they were up and running within a matter of a month. It's just one example of a very compelling value proposition. >> So you feel good about where you guys are right now relative to these big waves coming? >> Yeah, I think so. Well, I mean, you know, there's a lot of big waves coming, and... >> What are the biggest ones that you see? >> Well, I mean, I think clearly one of the big ones, of course, out there is Broadcom buying VMware, or potentially buying VMware, a great company. I used to work there for many years, and I have a lot of respect for what VMware has done for the industry in terms of virtualization of servers and creating their entire portfolio. >> Is it true you're hiring a lot of VMware folks? >> Yes, I mean a lot of them are coming over now in anticipation. We've been hiring our fair share, but they're going other places too. >> A lot of VMware alumni at Nutanix now. >> Yes, there are certainly, we have our share of VMware alumni. We also have a share of alumni from others. >> We call them the V mafia, by the way. (laughs) >> I dunno about the V mafia, but... But it's a great company, but I think right now a lot of customers are wondering what's going to happen, and therefore, they are looking at potentially what are the other alternatives? And we are very much front and center in those discussions. >> Well, Dave Alante and I, and the team have been very bullish on on-premise cloud operations, which you guys are doing there. How would you describe the supercloud concept to a customer when they say, "Hey, what's the supercloud? It's becoming a thing. How would you describe what it is and the benefits?" >> Yeah, and I think the first thing is to tell them, what problem are you looking to solve? And the problem for them is, they have applications everywhere. They have data everywhere. How do their teams run and deal with all of this? And what they find is the way they're doing it today is a different operating platform for every one of these. If you're on Amazon, it's one platform. If you're on Azure, it's another. If you're on-prem, it's a third. If you want to go to the edge, probably a fourth, and it's a messy, complex thing for their IT teams. What a supercloud does is essentially unify all of these into a consistent operating model. You get a cloud operating model, you get the agility and the benefits, but with one way of handling your compute, storage, and network needs, one way of handling your security policies and security constructs, and giving you that. So it's such a dramatic simplification on the one side, and it's a dramatic enabler, because it now enables you to run these applications wherever you want, completely free. >> Yeah. It really bridges the cloud native. It's kind of the interplay on the cloud between SaaS and IaaS, solves a lot of problems, highly integrated, and takes that model to the complexity of multiple environments. >> Exactly. >> That's a super cool environment. >> (John speaks over Rajiv) Across any environment, wherever.
It's changing this thing from cloud being associated with the public cloud to cloud being available everywhere in a consistent way. >> And that's essentially the goodness of cloud, going everywhere. >> Yeah. >> Yeah, but that extension is what you call a supercloud. >> Rajiv, thank you so much for your time. I know you're super valuable, and you got a company to run. One final question for you. The edge is exploding. >> Yes. >> It's super dynamic. We kind of all know it's there. The industrial edge. You got the IoT edge and just the edge in general. On-premise, I think, is hybrid, it's the steady state, looking good. Everything's good. It's getting better, of course, things with cloud native and all that good stuff. What's your view of the edge? It's super dynamic, a lot of shifting, OT, IT, that's actually transformed. >> Yes, absolutely. >> Huge industrial thing. Amazon is buying, you know, industrial robots now. >> Yes. >> Space is around the corner, a lot of industrial advance with machine learning and the software side of things, so the edge is exploding. >> Yeah, you know, and I think one of the interesting things about that exploding edge is that it tends to be both compute and data heavy. It's not this notion of very thin edges. Yes, you've got thin edges too, of course, which may just be sensors on the one hand, but you're seeing an increased need for compute and storage at the edges, because a lot of these are crunching applications that generate a lot of data and crunch a lot of data. There are latency requirements, and there's even people deploying GPUs at the edges for image recognition and so forth, right? So this is. >> The edge is the data center now. >> Exactly. Think of the edge starting to look like a mini data center, but one that needs to be highly automated. You're not going to be able to put people at every one of these locations. You've got to be able to do all your services, lifecycle management, everything completely remotely. >> Self-healing, all this good stuff. >> Exactly. It has to be completely automated and self-healing and upgradeable and, you know, lifecycle managed from the cloud, so to speak. And so there's going to be this interlinkage between the edge and the cloud, and you're going to actually, essentially what you need is a cloud managed edge. >> Yeah, and this is where the supercloud extends, where you can extend the value of what you're building to these dynamically new emerging... and it's just the beginning. There'll be more. >> Oh, there's a ton of new applications emerging there. And I think that's going to be, I mean, there's people out there who quote that half of data is going to be generated at the edge in a couple of years. >> Well, Rajiv, I am excited that you can bring the depth of technical architectural knowledge to the table on supercloud, as well as run a company. Congratulations on your success, and thanks for sharing with us and being part of our community. >> No, thank you, John, for having me on your show. >> Okay. Supercloud 22, we're continuing to open up the conversation. There is structural change happening. We're going to watch it. We're going to make it an open conversation. We're not going to make a decision. We're going to just let everyone discuss it and see how it evolves, with the best in the business discussing it, and we're going to keep it going. Thanks for watching. (digital music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Rajiv Ramaswami | PERSON | 0.99+ |
Taiwan | LOCATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Rajiv | PERSON | 0.99+ |
Asia | LOCATION | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Dave Alante | PERSON | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
Lenovo | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
UK | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
100% | QUANTITY | 0.99+ |
three years | QUANTITY | 0.99+ |
HP | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Supermicro | ORGANIZATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
First | QUANTITY | 0.99+ |
less than a month | QUANTITY | 0.99+ |
Broadcom | ORGANIZATION | 0.99+ |
CapEx | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
CUBE | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
each customer | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
a year | QUANTITY | 0.99+ |
one platform | QUANTITY | 0.99+ |
Bare Metal | ORGANIZATION | 0.98+ |
fourth | QUANTITY | 0.98+ |
one cloud | QUANTITY | 0.98+ |
two transformations | QUANTITY | 0.98+ |
Supercloud | ORGANIZATION | 0.98+ |
agile | TITLE | 0.97+ |
one way | QUANTITY | 0.97+ |
theCUBE | ORGANIZATION | 0.97+ |
ORGANIZATION | 0.97+ | |
FSP | ORGANIZATION | 0.97+ |
first thing | QUANTITY | 0.97+ |
SiliconANGLE | ORGANIZATION | 0.96+ |
supercloud | ORGANIZATION | 0.96+ |
Agile | TITLE | 0.95+ |
a month | QUANTITY | 0.95+ |
third | QUANTITY | 0.95+ |
OpEx | ORGANIZATION | 0.95+ |
One final question | QUANTITY | 0.94+ |
HCI | ORGANIZATION | 0.94+ |
one side | QUANTITY | 0.93+ |
Supercloud22 | ORGANIZATION | 0.91+ |
One way | QUANTITY | 0.9+ |
Hyperconverged Infrastructure | ORGANIZATION | 0.9+ |
big | EVENT | 0.9+ |
one example | QUANTITY | 0.89+ |
Supercloud 22 | ORGANIZATION | 0.87+ |
big waves | EVENT | 0.8+ |
Azure | TITLE | 0.79+ |
Shimon Ben David | KubeCon + CloudNativeCon NA 2021
>> Welcome back to Los Angeles. Lisa Martin here with Dave Nicholson, day three of theCUBE's coverage of KubeCon + CloudNativeCon North America 2021. We've been having some great live conversations in the last three days with actual guests on set. We're very pleased to welcome, for the first time to our program, Shimon Ben David, the CTO of Weka. Welcome. >> Hey, nice to be here, nice to be here. >> Great to be at an in-person event, isn't it? >> No, it's awesome. They've done a great job. >> I think you're green. You're green, like we're green, fully green, which is fantastic. >> Actually, purple, and hearts, wake up. >> Yeah, good to know. Green means you're shaking hands and maybe the occasional hug. So talk to us about Weka. What's going on? We'll kind of dig into what you guys are doing with Kubernetes, but give us that overview of what's going on at WekaIO. >> Okay, so Weka has been around for several years already. We actually GA'd our product in 2016, so it's been out there. Actually, eight of the Fortune 50 are using Weka. For those of you that don't know Weka, by the way, we're a fully software-defined parallel file system, cloud native. I know it's a mouthful and it's buzzword compliant, but we actually baked all of that into the product from day one, because we did other storage companies in the past, and we actually wanted to take the best of all worlds and put that into one storage that is not another me too. It's not another compromise. So we built the environment, we built Weka, to actually accommodate for upcoming technologies. We identified that cloud technology is upcoming, networking actually exploded in a good way, one gig, 10 gig, 100 gig, 200 gig came out, so we knew that that's going to be a trend, and also cloud. We saw cloud being utilized more and more, and we kind of, like, bet that being able to be a parallel file system for the cloud would be amazing, and it does. >> How are you not a me too? Tell us that. When you're talking with customers, what are, like, the top three things that really differentiate Weka? >> Speed, scale, and simplicity. >> Speed, scale... I like how fast you said that. >> Like, quicker. >> So, speed, sorry. You see a lot of file systems, a lot of storage environments that are very throughput oriented. So speed: how many gigabytes can you do? To be honest, a lot of storage environments are saying, we can do that in that many gigabytes. When we designed Weka, we actually wanted to provide an environment that would actually be faster than your local NVMe on your local server, because that's what we see customers actually using for performance. They're copying the data to their local NVMes and processing it. We created an environment that is actually throughput oriented, IOPS oriented, latency sensitive, and metadata performance. So it's kind of like the best of all worlds, and it's not just a claim. We actually showed it in many benchmarks, top 500 supercomputing centers. I can talk for hours about performance, but that's performance. Scalability: we actually are able to scale, and we did show that we scaled to multiple petabytes. We actually took some projects from scale-out NAS appliances that actually got to the limit of their scale-out, and we just continued from there, double digit, triple digit petabytes upcoming. And also, scale is also how many clients can you service at once. So it's not only how much capacity, but also how many clients can you work with concurrently. And simplicity: all of that, we, from the initial design points, were, let's make something that is usable by users, and not
like... so my mother can really use it, right? And so we have a very simple, intuitive user interface, but it's also API driven, so you can automate around it. >> So simplicity, speed, and scale. Love it. So, Shimon, it's interesting, you said that your company was founded in 2016, in that time period, because, uh, before... GA'd, GA 2016. But in those surrounding years, there were a lot of companies that were coming out at sort of the tail end of the legacy storage world. >> Yeah. >> Trying to just cannibalize that business. You came out looking into the future. Where are we in that future now? Because you could argue that you guys maybe started a little early. You could have taken a couple of years off and waited for the wave, in the world of containerization as an example, to come through. But this is really, this is like your time to shine, isn't it? >> Exactly, and being fully software defined, we can always adapt, and we're always adapting. So we bet on new technologies, networking, flash environments, and these just keep on going and improving, right? When we went out, we were, like, in 10 gig environments with SSDs, but we already knew that we're going to go to 100, and we also designed already for NVMes. So, kind of like, hardware constantly improved. CPUs, for example, the new Intel CPUs, the new AMD CPUs, we just accommodated for them, because being software defined means that we actually bypass most of their inner workings and do things ourselves. So that's awesome. And then the cloud environment is growing massively, and containers. We see containers now in everyday use cases, where initially it was maybe VMs, maybe bare metal, but now everything is containerized, and we're actually starting to see more and more Kubernetes orchestrated environments coming out as well. I still have a feeling that this is still a bit of dev property: hey, I'm a developer, I'm a DevOps engineer, I'm going to do it. And I actually saw a lot of exciting things here, taking it to the next level, to the IT environment. So that's where we will show benefit as well. >> So talk about how Kubernetes users are working with Weka. What superpower does that give them? >> So, I think if you look at the current storage solutions that you have for Kubernetes, they're interesting, but they're more of, like, the let's take what we have today and plug it in, right? So Weka kind of has a CSI plug-in, so it's easy to integrate and work with. But also, when you look at it, block is still being used in Kubernetes environments that I'm familiar with. Block was still being used for high performance, so I used PVs and PVCs to manage my pods' claims, but then I mounted them as read-write-once, right, because I couldn't share them. Then, if a pod failed, I had to reclaim the PVC and connect it to multiple environments, because I wanted block storage, because it's fast. And then NFS environments were used as read-write-many, to be a shared environment, but low performance. So by being able to say, hey, we now have an environment that is fully covered, Kubernetes integrated, and it provides all the performance aspects that you need, you don't need to choose. Just run your fleet of pods, your cluster of pods, read-write-many. You don't need to manage old reclamations just to create new pods. You get the best of all worlds, ease of use and also the performance. [A minimal code sketch of this read-write-once versus read-write-many distinction follows the transcript below.] Additionally, because there's always more, right? We now see more and more cloud environments, right? So Weka also has the ability, and I didn't focus on that, but
it's really, uh, amazing. It has the ability to move data around between different environments. So imagine, and we see that, imagine on-prem environments that are now using Weka. You're in the terabytes or petabyte scale. Obviously you can copy and rsync and rclone, right? But nobody really does it, because it doesn't work for these capacities. So Weka has the ability to say, hey, I can move data around between different environments, so create more copies, or simply burst. So we see customers that are working on-prem throwing data to the cloud, we see customers working on the cloud, and then we actually now see customers starting to bridge the gap. Because cloud bursting, again, is a very nice buzzword. We see some customers exploring it. We don't really see customers doing it at the moment, but the customers that are exploring it are exploring throwing the compute out to the cloud using the Kubernetes cluster and throwing the data to the cloud using the Weka cluster. And one last thing, because that's another interesting use case: Weka can be run converged on the same Kubernetes cluster, so there is no need to have, even, so in essence it's a zero footprint storage. You don't need to even add more servers. So I don't need to buy a box and connect my cluster to that box. I just run it on the same servers, and if I want more compute nodes, I add more nodes, and I'll add more storage by doing that. So it's that simple. >> So I was just looking at the website and see that Weka was just, this was just announced last week, a visionary in the Gartner MQ for, what's the MQ, for distributed file systems and object storage. Talk to us about that. What does that distinction mean for the company, and how does the voice of the customer validate that? >> Great. So actually, this is interesting. This is a culmination of a lot of hard work that all of the team did writing the product, and all of the customers, by adopting the product. Because, in order to get to that, I don't know if anybody is familiar with the criteria, but you need to have a large footprint, a distinguished footprint, worldwide. So we worked hard on getting that, and we see that in multiple markets, by the way. Financials, we see massive amounts of AI/ML projects, containerized, Kubernetes orchestrated. So getting to that was a huge achievement. You could see other storage devices not being there, because not every storage appliance is a parallel file system. Usually, I think, when you look at parallel file systems, you attribute complexity: I need an army of people to manage it and to tweak it. So that's, again, one of the things that we did, and that's why we really think that we're a cool vendor in that Magic Quadrant, right? Because it's that simple to manage. You don't need to fine-tune it in, like, a bazillion different ways. Just install it, and it works. You map it to your containers. Simple. >> So we're here at KubeCon, a lot of talk about cloud native, a lot of projects, a lot of integration, a lot of community development. You've described installing Weka into a Kubernetes cluster. Are there integrations that are being worked on? Is there connective tissue between, essentially, this parallel file system that's spanning, say you have five nodes, you have Weka running on those five nodes, you have a Kubernetes cluster spanning those five nodes, what kinds of things are happening in the community, maybe, that you're supporting or that
you're participating in, to connect those together? >> So right now, we only have the CSI plug-in. We didn't invest in anything more. Actually, one of the reasons that I'm here is to get to know the community a bit more and to get more involved, and we're definitely looking into how more can we help customers utilize Kubernetes and enjoy the Weka storage. Do we need to do some sort of integration? I'm actually exploring that, and I think you'll see some... >> Well, so, interesting, so we got you at a good time now. >> Exactly. >> Yeah, because, with an API approach, you have the connectivity, and you're providing this storage layer that provides all the attributes that you described, but you are here, live, living proof, green wristband and all, showing that the future will be even more interesting. >> Voting on the future, yeah, and seeing how we can help the community and what can we do together. And actually, I'm really impressed by the conference. It's been amazing. >> We've been talking about that all week, being impressed with the fact that, we've been hearing, there's between 2,700 and 3,100 people here, which is amazing, in person. Of course, there's many more that are participating virtually, but they've done a great job. And these green wristbands, by the way, we talked about these a minute ago: you have a red, yellow, or green option to tell others, are you comfortable with contact, handshakes, hugs, etc. I love the fact that I'm sandwiched by two greens. But they've done a great job of making this safe, and I hope that this is a message. This is a big community, the CNCF has 138,000 contributors. I hope this is a message that shows that you can do these events, we can get together in person again, because there's nothing like the hallway track. You can't replicate that on video. >> Exactly, grabbing people in the hallway, in the hotel, in the lobby, talking about their problems, seeing what they need, what we do. It's amazing, right? >> So give us a little bit, in our last few minutes here, about the go-to-market. What is the GTM strategy for Weka? >> So that's an interesting question. So, being fully software defined, when we started, we thought, do we do another me too, another storage appliance? Even though we're software defined, could we just go to market with our own boxes? And we actually decided to go differently, because our market was actually the storage vendors, sorry, the server vendors. We actually decided to go and enable other bare metal environment manufacturers to now create storage solutions. So we now have a great partnership with HPE, with Supermicro, with Hitachi, and more, as well with AWS, because, again, being software defined, we can run on the cloud. We do have massive projects on the clouds, some of them we're all familiar with, some I can't mention. And we chose that as our go-to-market because we are fully software defined. We don't need any specific hardware. We just need a server with NVMes, or an instance with NVMes, and that's it. Usually when I talk about what we need as a product, I also talk about the list of what we don't need, which is longer. We don't need JBODs, JBOFs, servers, UPS, we don't need all of that, RAID arrays. We just need the servers. So a lot of the server vendors actually identified that, and then, when we approach them and say, hey, this is what we can do on your bare metal, on your environment, is that valuable? Of course. So that's mostly our go-to-market. Another thing is that we chose to focus on the
markets that we're going after. We're not another me too. We're not another storage for your home directories, even though, obviously, we are in some cases, by customers. But we're the storage where, if you could shrink the wall clock time of your pipeline from two weeks to four hours, and we did, that's like 84 times faster, if you could do that, how valuable is that? That's what we do, and we see that more and more in modern enterprises. So when we started doing that, people were saying, hey, so your go-to-market is only HPC? No. If you look at AI/ML, life science, financials, and the list goes on, right, modern environments are now being what HPC was a few years ago. There's massive amounts of data. So our go-to-market is to be very targeted toward these markets, and then, we can say that, they also push us to other sides of the, hey, I have a Weka, so I might put my VMware on it, I might do my distributed compilation on this. It's growing organically, so that's fun to see. >> Awesome. Tremendous amount of growth. I love that you talked about it very clearly: simplicity, speed, and scale. I think you did a great job of articulating why Weka is not a me too. Last question: are there any upcoming webinars or events or announcements that folks can go to, to learn more about Weka? >> Great question. I didn't come with my marketing hat, but we constantly have events, and what we usually do, we talk about the markets that we go after. So, for example, a while ago we were in BioIT, so we published some life science articles. I need to see what's in the pipeline and definitely share it with you. >> Well, I know you guys are going to be at re:Invent. >> We do. >> So hopefully we'll see you at re:Invent. >> We're at Supercomputing as well, if you'll be there. >> Fantastic. I see that on your website there. I don't think we're there, but we will see you. >> We're a strong believer of these conferences, of these communities, of being on the ground, talking with people. Obviously, if you can't do it, we'll do it with Zoom, but this is priceless. >> Yeah, it is. There's nothing like it. Shimon, it's been great to have you on the program. Thank you so much for giving us an update on Weka, sharing what you guys are doing, how you're helping Kubernetes users, and what differentiates the technology. We appreciate all your insights, and your energy, too. >> No, it's not me, it's the product. >> Ah, I love it. For Dave Nicholson, I'm Lisa Martin, coming to you live from Los Angeles. This is KubeCon + CloudNativeCon North America '21 coverage on theCUBE, wrapping up three days of wall-to-wall coverage. We thank you for watching. We hope you stay well.
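Editor's note: to make the read-write-once versus read-write-many discussion above concrete, here is a minimal sketch using the official Kubernetes Python client. It assumes a CSI driver that supports shared (RWX) volumes is installed in the cluster and exposed through a StorageClass; the class names ("fast-block", "weka-fs"), claim names, and sizes are hypothetical placeholders for illustration, not values taken from the interview.

```python
# Minimal sketch: requesting a shared (ReadWriteMany) volume versus a
# single-writer (ReadWriteOnce) block volume with the Kubernetes Python client.
# Assumes `pip install kubernetes` and a reachable cluster/kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod
core = client.CoreV1Api()

def make_pvc(name: str, access_mode: str, storage_class: str, size: str):
    """Build a PersistentVolumeClaim object with the given access mode."""
    return client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=[access_mode],
            storage_class_name=storage_class,  # hypothetical StorageClass names
            resources=client.V1ResourceRequirements(requests={"storage": size}),
        ),
    )

# The classic pattern described in the interview: fast block storage, but only
# one pod can write, so a failed pod means reclaiming and re-attaching the claim.
rwo_claim = make_pvc("scratch-block", "ReadWriteOnce", "fast-block", "100Gi")

# The shared-filesystem pattern: a whole fleet of pods mounts the same claim.
rwx_claim = make_pvc("shared-data", "ReadWriteMany", "weka-fs", "1Ti")

for pvc in (rwo_claim, rwx_claim):
    core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
    print(f"created PVC {pvc.metadata.name}")
```

Pods then reference the RWX claim by name in their volume spec; the point of the shared mode is that many pods across nodes can mount the same data concurrently, which is what the NFS-style read-write-many pattern provided before, but at lower performance.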
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
lisa martin | PERSON | 0.99+ |
2016 | DATE | 0.99+ |
84 times | QUANTITY | 0.99+ |
two weeks | QUANTITY | 0.99+ |
los angeles | LOCATION | 0.99+ |
dave nicholson | PERSON | 0.99+ |
hpc | ORGANIZATION | 0.99+ |
supermicro | ORGANIZATION | 0.99+ |
10 gig | QUANTITY | 0.99+ |
last week | DATE | 0.99+ |
four hours | QUANTITY | 0.99+ |
138 000 contributors | QUANTITY | 0.99+ |
north america | LOCATION | 0.98+ |
200 gig | QUANTITY | 0.98+ |
3 100 people | QUANTITY | 0.98+ |
first time | QUANTITY | 0.98+ |
KubeCon | EVENT | 0.98+ |
two grains | QUANTITY | 0.98+ |
three days | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
one gig | QUANTITY | 0.97+ |
CloudNativeCon | EVENT | 0.97+ |
2 700 | QUANTITY | 0.97+ |
100 gig | QUANTITY | 0.97+ |
cloudnativecon | EVENT | 0.95+ |
100 | QUANTITY | 0.94+ |
zero footprint | QUANTITY | 0.94+ |
today | DATE | 0.94+ |
one last thing | QUANTITY | 0.93+ |
nvmes | TITLE | 0.93+ |
a couple of years | QUANTITY | 0.93+ |
aws | ORGANIZATION | 0.93+ |
five nodes | QUANTITY | 0.92+ |
kubecon | ORGANIZATION | 0.92+ |
weka | ORGANIZATION | 0.92+ |
Shimon Ben David | PERSON | 0.91+ |
shimon | PERSON | 0.9+ |
a minute ago | DATE | 0.89+ |
hpe | ORGANIZATION | 0.89+ |
NA 2021 | EVENT | 0.85+ |
few years ago | DATE | 0.84+ |
one storage | QUANTITY | 0.83+ |
cncf | ORGANIZATION | 0.81+ |
a lot of hard work | QUANTITY | 0.81+ |
wave | EVENT | 0.8+ |
50 | TITLE | 0.8+ |
five | QUANTITY | 0.8+ |
day one | QUANTITY | 0.79+ |
a lot of file system | QUANTITY | 0.76+ |
2020 | DATE | 0.76+ |
a lot of companies | QUANTITY | 0.75+ |
david | PERSON | 0.74+ |
lot of storage environments | QUANTITY | 0.74+ |
mq4 | TITLE | 0.72+ |
three things | QUANTITY | 0.71+ |
a lot of storage | QUANTITY | 0.71+ |
hitachi uh | ORGANIZATION | 0.69+ |
a while ago | DATE | 0.69+ |
gartner | TITLE | 0.69+ |
kubecon | EVENT | 0.67+ |
terabytes | QUANTITY | 0.67+ |
multiple petabytes | QUANTITY | 0.67+ |
three days | DATE | 0.65+ |
day | QUANTITY | 0.62+ |
hours | QUANTITY | 0.61+ |
csi | TITLE | 0.6+ |
petabyte | QUANTITY | 0.57+ |
nodes | TITLE | 0.55+ |
gigabytes | QUANTITY | 0.55+ |
bioit | TITLE | 0.54+ |
intel | ORGANIZATION | 0.52+ |
re | EVENT | 0.51+ |
jade | PERSON | 0.48+ |
waka | ORGANIZATION | 0.47+ |
con | LOCATION | 0.43+ |
fortune | COMMERCIAL_ITEM | 0.43+ |
top 500s | QUANTITY | 0.4+ |
three | QUANTITY | 0.33+ |
21 | QUANTITY | 0.3+ |
Monica Kumar & Tarkan Maner, Nutanix | CUBEconversation
(upbeat music) >> The cloud is evolving. You know, it's no longer a set of remote services somewhere off in the cloud, in the distance. It's expanding. It's moving to on-prem. On-prem workloads are connecting to the cloud. They're spanning clouds in a way that hides the plumbing and simplifies deployment, management, security, and governance. So hybrid multicloud is the next big thing in infrastructure, and at the recent Nutanix .NEXT conference, we got a major dose of that theme, and with me to talk about what we heard at that event, what we learned, why it matters, and what it means to customers are Monica Kumar, who's the senior vice president of marketing and cloud go-to-market at Nutanix, and Tarkan Maner, who's the chief commercial officer at Nutanix. Guys, great to see you again. Welcome to the theCUBE. >> Great to be back here. >> Great to see you, Dave. >> Okay, so you just completed another .NEXT. As an analyst, I like to evaluate the messaging at an event like this, drill into the technical details to try to understand if you're actually investing in the things that you're promoting in your keynotes, and then talk to customers to see how real it is. So with that as a warning, you guys are all in on hybrid multicloud, and I have my takeaways that I'd be happy to share, but, Tarkan, what were your impressions, coming out of the event? >> Look, you had a great entry. Our goal, as Monica is going to outline, too, cloud is not a destination. It's an operating model. Our customers are basically using cloud as a business model, as an operating model. It's not just a bunch of techno mumbo-jumbo, as, kind of, you outlined. We want to make sure we make cloud invisible to the customer so they can focus on what they need to focus on as a business. So as part of that, we want to make sure the workloads, the apps, they can run anywhere the way the customer wants. So in that context, you know, our entire story was bringing customer workloads, use-cases, partner ecosystem with ISVs and cloud providers and service providers and ISPs we're working with like Citrix on end user computing, like Red Hat on cloud native, and also bringing the right products, both in terms of infrastructure capability and management capability for both operators and application developers. So bringing all these pieces together and make it simple for the customer to use the cloud as an operating model. That was the biggest goal here. >> Great, thank you. Monica, anything you'd add in terms of your takeaways? >> Well, I think Tarkan said it right. We are here to make cloud complexity invisible. This was our big event to get thousands of our customers, partners, our supporters together and unveil our product portfolio, which is much more simplified, now. It's a cloud platform. And really have a chance to show them how we are building an ecosystem around it, and really bringing to life the whole notion of hybrid multicloud computing. >> So, Monica, could you just, for our audience, just summarize the big news that came out of .NEXT? >> Yeah, we actually made four different announcements, and most of them were focused around, obviously, our product portfolio. So the first one was around enhancements to our cloud platform to help customers build modern, software-defined data centers to speed their hybrid multicloud deployments while supporting their business-critical applications, and that was really about the next version of our flagship, AOS six, availability. 
We announced the general availability of that, and key features really included things like built-in virtual networking, disaster recovery enhancements, security enhancements that otherwise would need a lot of specialized hardware, software, and skills are now built into our platform. And, most importantly, all of this functionality being managed through a single interface, right? Which significantly decreases the operational overhead. So that was one announcement. The second announcement was focused around data services and really making it easy for customers to simplify data management, also optimize big data and database workloads. We announced capability that now improves performances of database workloads by 2x, big data workloads by 3x, so lots of great stuff there. We also announced a new service called Nutanix Data Lens, which is a new unstructured data governance service. So, again, I don't want to go into a lot of details here. Maybe we can do it later. That was our second big announcement. The third announcement, which is really around partnerships, and we'll talk more about that, is with Microsoft. We announced the preview of Nutanix Clusters and Azure, and that's really taking our entire flagship Nutanix platform and running it on Azure. And so, now, we are in preview on that one, and we're super excited about that. And then, last but not least, and I know Tarkan is going to go into a lot more detail, is we announced a strategic partnership with Citrix around the whole future of hybrid work. So lots of big news coming out of it. I just gave you a quick summary. There's a lot more around this, as well. >> Okay. Now, I'd like to give you my honest take, if you guys don't mind, and, Tarkan, I'll steal one of your lines. Don't hate me, okay? So the first thing I'm going to say is I think, Nutanix, you have the absolute right vision. There's no question in my mind. But what you're doing is not trivial, and I think it's going to play out. It's going to take a number of years. To actually build an abstraction layer, which is where you're going, as I take it, as a platform that can exploit all the respective cloud native primitives and run virtually any workload in any cloud. And then what you're doing, as I see it, is abstracting that underlying technology complexity and bringing that same experience on-prem, across clouds, and as I say, that's hard. I will say this: the deep dives that I got at the analyst event, it convinced me that you're committed to this vision. You're spending real dollars on focused research and development on this effort, and, very importantly, you're sticking to your true heritage of making this simple. Now, you're not alone. All the non-hyperscalers are going after the multicloud opportunity, which, again, is really challenging, but my assessment is you're ahead of the game. You're certainly focused on your markets, but, from what I've seen, I believe it's one of the best examples of a true hybrid multicloud-- you're on that journey-- that I've seen to date. So I would give you high marks there. And I like the ecosystem-building piece of it. So, Tarkan, you could course-correct anything that I've said, and I'd love for you to pick up on your comments. It takes a village, you know, you're sort of invoking Hillary Clinton, to bring the right solution to customers. So maybe you could talk about some of that, as well. >> Look, actually, you hit all the right points, and I don't hate you for that. I love you for that, as you know. 
Look, at the end of the day, we started this journey about 10 years ago. The last two years with Monica, with the great executive team, and overall team as a whole, big push to what you just suggested. We're not necessarily, you know, passionate about cloud. Again, it's a business model. We're passionate about customer outcomes, and some of those outcomes sometimes are going to also be on-prem. That's why we focus on this terminology, hybrid multicloud. It is not multicloud, it's not just private cloud or on-prem and non-cloud. We want to make sure customers have the right outcomes. So based on that, whether those are cloud partners or platform partners like HPE, Dell, Supermicro. We just announced a partnership with Supermicro, now, we're selling our software. HPE, we run on GreenLake. Lenovo, we run on TruScale. Big support for Lenovo. Dell's still a great partner to us. On cloud partnerships, as Monica mentioned, obviously Azure. We had a big session with AWS. Lots of new work going on with Red Hat as an ISV partner. Tying that also to IBM Cloud, as we move forward, as Red Hat and IBM Cloud go hand in hand, and also tons of workarounds, as Monica mentioned. So it takes a village. We want to make sure customer outcomes deliver value. So anywhere, for any app, on any infrastructure, any cloud, regardless standards or protocols, we want to make sure we have an open system coverage, not only for operators, but also for application developers, develop those applications securely and for operators, run and manage those applications securely anywhere. So from that perspective, tons of interest, obviously, on the Citrix or the UC side, as Monica mentioned earlier, we also just announced the Red Hat partnership for cloud services. Right before that, next we highlighted that, and we are super excited about those two partnerships. >> Yeah, so, when I talked to some of your product folks and got into the technology a little bit, it's clear to me you're not wrapping your stack in containers and shoving it into the cloud and hosting it like some do. You're actually going much deeper. And, again, that's why it's hard. You could take advantage of those things, but-- So, Monica, you were on the stage at .NEXT with Eric Lockhart of Microsoft. Maybe you can share some details around the focus on Azure and what it means for customers. >> Absolutely. First of all, I'm so grateful that Eric actually flew out to the Bay Area to be live on stage with us. So very super grateful for Eric and Azure partnership there. As I said earlier, we announced the preview of Nutanix Clusters and Azure. It's a big deal. We've been working on it for a while. What this means is that a select few organizations will have an opportunity to get early access and also help shape the roadmap of our offering. And, obviously, we're looking forward to then announcing general availability soon after that. So that's number one. We're already seeing tremendous interest. We have a large number of customers who want to get their hands on early access. We are already working with them to get them set up. The second piece that Eric and I talked about really was, you know, the reason why the work that we're doing together is so important is because we do know that hybrid cloud is the preferred IT model. You know, we've heard that in spades from all different industries' research, by talking to customers, by talking to people like yourselves. However, when customers actually start deploying it, there's lots of issues that come up. 
There's limited skill sets, resources, and, most importantly, there's a disparity between the on-premises networking security management and the cloud networking security management. And that's what we are focused on, together as partners, is removing that barrier, the friction between on-prem and Azure cloud. So our customers can easily migrate their workloads in Azure cloud, do cloud disaster recovery, create a burst into cloud for elasticity if they need to, or even use Azure as an on-ramp to modernize applications by using the Azure cloud services. So that's one big piece. The second piece is our partnership around Kubernetes and cloud native, and that's something we've already provided to the market. It's GA with Azure and Nutanix cloud platform working together to build Kubernetes-based applications, container-based applications, and run them and manage them. So there's a lot more information on nutanix.com/azure. And I would say, for those of our listeners who want to give it a try and who want their hands on it, we also have a test drive available. You can actually experience the product by going to nutanix.com/azure and taking the test drive. >> Excellent. Now, Tarkan, we saw recently that you announced services. You've got HPE GreenLake, Lenovo, their as-a-service, which is called TruScale. We saw you with Keith White at HPE Discover. I was just with Keith White this week, by the way, face to face. Awesome guy. So that's exciting. You got some investments going on there. What can you tell us about those partnerships? >> So, look, as we talked through this a little bit, the HPE relationship is a very critical relationship. One of our fastest growing partnerships. You know, our customers now can run Nutanix software on any HPE platform. We call it DX, that's the platform. But beyond that, now, if the customers want to use HPE's service, as-a-service, now Nutanix software, the entire stack, it's not only the hybrid multicloud platform, the database capability, EUC capability, storage capability, can run on HPE's service, the GreenLake service. Same thing, by the way, same way, available on Lenovo. Again, we're doing similar work with Dell and Supermicro, again, giving our customers choice. If they want to go to a public cloud partner like Azure, AWS, they have that choice. And also, as you know, and I know, Monica, you're going to talk about this, with our GSI partnerships and new service provider program, we're giving options to customers because, in some other regions, HPE might not be their choice, or Azure might not be the choice, and a local telco might be the choice in some country like Japan or India. So we give options and capability to the customers to run Nutanix software anywhere they like. >> I think that's a really important point you're making because, as I see all these infrastructure providers, who are traditionally on-prem players, introduce as-a-service, one of the things I'm looking for is, sure, they've got to have their own services, their own products available, but what other ecosystem partners are they offering? Are they truly giving the customers choice? Because that's, really, that's the hallmark of a cloud provider. You know, if we think about Amazon, you don't always have to use the Amazon product. You can use actually a competitive product, and that's the way it is. They let the customers choose. Of course, they want to sell their own, but, if you innovate fast enough, which, of course, Nutanix is all about innovation, a lot of customers are going to choose you.
So that's key to these as-a-service models. So, Monica, Tarkan mentioned the GSIs. What can you tell us about the big partners there? >> Yeah, definitely. Actually, before I talk about GSIs, I do want to make sure our listeners understand we already support AWS in the public cloud, right? So Nutanix is generally available on AWS today, to use to build a hybrid cloud offering. And the reason I say that is because our philosophy from day one, even on the infrastructure side, has been freedom of choice for our customers and supporting as large a number of platforms and substrates as we can. And that's the notion that we are continuing forward with here. So to talk about GSIs a bit more, obviously, when you say one platform, any app, any cloud, any cloud includes on-prem, it includes hyperscalers, it includes the regional service providers as well. So as an example, TCS is a really great partner of ours. We have a long history of working together with TCS in Global 2000 accounts across many different industries, retail, financial services, energy, and we are really focused, for example, with them on expanding our joint business around mission critical application deployments in our customer accounts, and specifically our databases with Nutanix Era, for example. Another great partner for us is HCL. In fact, HCL's solution SKALE DB, we showcased at .NEXT just yesterday. And SKALE DB is a fully managed database service that HCL offers, which includes the Nutanix platform, including Nutanix Era, which is our database service, along with HCL services, as well as the hardware and software that customers need to actually run their business applications on it. And then, moving on to service providers, you know, we have great partnerships, like with Cyxtera, who, in fact, was the service provider partner of the year. That's the award they just got. And many other service providers, including working with, you know, all of the edge cloud providers, like Equinix. So, I can go on. We have a long list of partnerships, but what I want to say is that these are very important partnerships to us, all the way from, as Tarkan said, OEMs, hyperscalers, ISVs, you know, like Red Hat, Citrix, and, of course, our service provider and GSI partnerships. And then, last but not least, I think, Tarkan, I'd love for you to maybe comment on our channel partnerships as well, right? That's a very important part of our ecosystem. >> No, absolutely. You're absolutely right, Monica. As you suggested, our GSI program is one of the best programs in the industry in the number of GSIs we support, along with the new SP program, the enterprise solution provider and service provider program covering telcos and regional service providers, like you suggested, OVH in France, NTT in Japan, Yotta group in India, Cyxtera in the US. We have over 50 new service providers signed up in the last few months since the announcement, and we're tying all these things, obviously, to our overall channel ecosystem with our distributors and resellers, which is moving very nicely. We have Christian Alvarez, who is running our channel programs globally. And one last piece, Dave, I think this was an important point that Monica brought up. Again, give choice to our customers. It's not about cloud by itself. It's outcomes, but cloud is an enabler to get there, especially in a hybrid multicloud fashion. And the last point I would add to this is to help customers regardless of the stage they're in in their cloud migration.
From rehosting to replatforming, repurchasing or refactoring, rearchitecting applications, or retaining applications or retiring applications, they will have different needs. And what we're trying to do, with Monica's help, with the entire team: choice. Choice in stage, choice in maturity to migrate to cloud, and choice of platform. >> So I want to close. First of all, I want to give some of my impressions. So we've been watching Nutanix since the early days. I remember vividly being on the conference call with my colleague at the time, Stu Miniman. The state of the art was converged infrastructure at the time, bolting together storage, networking, and compute, very hardware centric. And the founding team at Nutanix told us, "We're going to have a software-led version of that." And you popularized, you kind of created, the hyperconverged infrastructure market. You created what we called at the time true private cloud, scaled up as a company, and now you're really going after that multicloud, hybrid cloud opportunity. Jerry Chen and Greylock, they just wrote a piece called Castles in the Cloud, and the whole concept was, and I say this all the time, the hyperscalers, last year, just spent a hundred billion dollars on CapEx. That's a gift to companies that can add value on top of that. And that's exactly the strategy that you're taking, so I like it. You've got to move fast, and you are. So, guys, thanks for coming on, but I want you both-- maybe, Tarkan, you can start, and Monica, you can bring us home. Give us your wrap up, your summary, and any final thoughts. >> All right, look, I'm going to go back to where I started this. Again, I know I go back to this like a broken record, but it's so important: we hear it from the customers. Again, cloud is not a destination. It's a business model. We are here to support those outcomes, regardless of platform, regardless of hypervisor, cloud type, or app, making sure, from legacy apps to cloud native apps, we are there for the customers regardless of their stage in their migration. >> Dave: Right, thank you. Monica? >> Yeah. And I, again, you know, the whole conversation we've been having is around this, but I'll remind everybody why we started out: our journey was to make infrastructure invisible. We are now very well poised to help our customers by making cloud complexity invisible, so our customers can focus on business outcomes and innovation. And, as you can see, coming out of .NEXT, we've been firing on all cylinders to deliver this differentiated, unified hybrid multicloud platform so our customers can really run any app, anywhere, on any cloud, and with the simplicity that we are known for, because, you know, our customers love us: an NPS of 90-plus, seven years in a row. But, again, the guiding principles are simplicity, portability, choice. And, really, our compass is our customers. So that's what we are focused on. >> Well, I love not having to get on planes every Sunday and coming back every Friday, but I do miss going to events like .NEXT, where I meet a lot of those customers. And, again, we've been following you guys since the early days. I can attest to the customer delight. I've spent a lot of time with them, driven in taxis, hung out at parties, on buses. And so, guys, listen, good luck in the next chapter of Nutanix. We'll be there reporting, and we really appreciate your time.
This is Dave Vellante for theCUBE, and, as always, we'll see you next time. (light music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Monica | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Monica Kumar | PERSON | 0.99+ |
Eric | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Tarkan | PERSON | 0.99+ |
Supermicro | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
France | LOCATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Hillary Clinton | PERSON | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Eric Lockhart | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Lenovo | ORGANIZATION | 0.99+ |
Keith White | PERSON | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
Tarkan Maner | PERSON | 0.99+ |
India | LOCATION | 0.99+ |
Christian Alvarez | PERSON | 0.99+ |
HCL | ORGANIZATION | 0.99+ |
Citrix | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
second piece | QUANTITY | 0.99+ |
Japan | LOCATION | 0.99+ |
second | QUANTITY | 0.99+ |
Keith White | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
US | LOCATION | 0.99+ |
Cyxtera | ORGANIZATION | 0.99+ |
HPE | TITLE | 0.99+ |
3x | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
seven years | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
both | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
second announcement | QUANTITY | 0.99+ |
Equinix | ORGANIZATION | 0.99+ |
TCS | ORGANIZATION | 0.99+ |
Azure | ORGANIZATION | 0.99+ |
Bay Area | LOCATION | 0.99+ |
two partnerships | QUANTITY | 0.99+ |
Nutanix Clusters | ORGANIZATION | 0.99+ |
UC | ORGANIZATION | 0.98+ |
one announcement | QUANTITY | 0.98+ |
over 50 new service providers | QUANTITY | 0.98+ |
Craig Nunes & Tobias Flitsch, Nebulon | CUBEconversations
(upbeat intro music) >> More than a decade ago, the team at Wikibon coined the term Server SAN. We saw the opportunity to dramatically change the storage infrastructure layer and predicted a major change in technologies that would hit the market. Server SAN had three fundamental attributes. First of all, it was software led. So all the traditionally expensive controller functions, like snapshots and clones and de-dupe and replication, compression, encryption, et cetera, were done in software, directly challenging a two-to-three-decade-long storage controller paradigm. The second principle was it leveraged and shared storage inside of servers. And the third, it enabled any-to-any topology between servers and storage. Now, at the time we defined this coming trend in a relatively narrow sense, inside of a data center location, for example, but in the past decade, two additional major trends have emerged. First, the software defined data center became the dominant model, thanks to VMware and others. And while this eliminated a lot of overhead, it also exposed another problem. Specifically, data centers today allocate, we estimate, around 35% of CPU cores and cycles to managing things like storage, network, and security and to offloading those functions. These are wasted cores, and doing this with traditional general purpose x86 processors is expensive and it's not efficient. This is why we've been reporting so aggressively on ARM's ascendancy into the enterprise. It's not only coming, it's here, and we're going to talk about that today. The second mega trend is cloud computing. Hyperscale infrastructure has allowed technology companies to put a management and control plane into the cloud and expand beyond our narrow server SAN scope within a single data center and support the management of distributed data at massive scale. And today we're on the cusp of a new era of infrastructure. And one of the startups in this space is Nebulon. Hello everybody, this is Dave Vellante, and welcome to this Cube Conversation where we welcome in two great guests, Craig Nunes, Cube alum, co-founder and COO at Nebulon, and Tobias Flitsch, who's director of product management at Nebulon. Guys, welcome. Great to see you. >> So good to be here Dave. Feels awesome. >> Soon, face to face. Craig, I'm heading your way. >> I can't wait. >> Craig, you heard my narrative upfront and I'm wondering, are those the trends that you guys saw when you, when you started the company? What are the major shifts in the world today that, that caused you and your co-founders to launch Nebulon? >> Yeah, I'll give you sort of the way we think about the world, which I think aligns super well with, with what you're talking about. You know, over the last several years, organizations have had a great deal of experience with public cloud data centers. And I think, like any platform or technology that, you know, gets its use in a variety of ways, you know, a bit of savvy is being developed by organizations on, you know, what do I put where, how do I manage things in the most efficient way possible? And there are, in terms of the types of folks we're focused on in Nebulon's business, we see now kind of three groups of people emerging, and, and we sort of simply coined three terms: the returners, the removers, and the remainers.
I'll explain what I mean by each of those. The returners are folks who maybe early on, you know, hit the gas on cloud, moved, you know, everything in, or a lot in, and realized that while it's awesome for some things, for other things it was less optimal. Maybe cost became a factor, or visibility into what was going on with their data was a factor, security, service levels, whatever. And they've decided to move some of those workloads back. Returners. There are what I call the removers, who are taking workloads that were, you know, born in the cloud back on-prem, and this was talked about a lot in Martine's blog that, you know, talked about a lot of the growth companies that built up such a large footprint in the public cloud that economics were kind of working against them. Depending on the knobs you turn, you know, you're probably spending two X, two and a half X what you might spend if you own your own factory. And you can argue about, you know, where your leverage is in negotiating your pricing with the cloud vendors, but there's a big gap. The last one, and I think probably the most significant in terms of who we've engaged with, is the remainers. And the remainers are, you know, hybrid IT organizations. They've got assets in the cloud and on-prem, and they aspire to an operational model that is consistent across everything, you know, leveraging all the best stuff that they've observed in their cloud-based assets. And it's kind of our view, frankly one we take from this constituency, that when people talk about cloud or cloud first, they're moving to something that is really more an operating model versus a destination or data center choice. And so, we get people on the phone every day talking about cloud first. And when you kind of dig into what they're after, it's operating model characteristics, not which data center do I put it in, and those, those decisions are separating. And so, you know, it's really that focus for us where we believe we're doing something unique for that group of customers.
And by doing that, customers gain benefits like easy remote management, right? You can manage your thermostat, your temperature, from anywhere in the world, basically. You don't have to worry about automated software updates anymore, and you can easily automate your home, your infrastructure, through this cloud control plane. And translating this idea to the data center, right? This idea is not necessarily new, right? If you look into the networking space, with Meraki Networks, now Cisco, or Mist Systems, now Juniper, they've really pioneered efforts in cloud management, so smart network infrastructure. And the key problem that they solved there is, you know, managing these vast amounts of access points and switches that are scattered across campuses and, you know, the data center. Now, if you translate that to what Nebulon does, it's really applying this smart infrastructure idea, this methodology, to application infrastructure, specifically to compute and storage infrastructure. And that's essentially what we're doing. So smart infrastructure, basically our offering at Nebulon, the product, comes with the benefits of this cloud experience, the public cloud operating model. As we've talked about, some of our customers look at the cloud as an operating model rather than a destination, a physical location, and with that, we bring this model, this experience, as an operating model to on-premises application infrastructure. And really, what you get with this offering from Nebulon, the benefits really center around, you know, four areas. First of all, rapid time to value, right? So application owners, think people that are not specialists or experts when it comes to IT infrastructure but more generalists, can provision on-premises application infrastructure in less than 10 minutes, right? They can go from just bare metal physical racks to the full application stack in less than 10 minutes, so they're up and running a lot quicker, and they can immediately deliver services to their end customers. Second, cloud-like operations, this notion of zero-touch remote management, which, over the last couple of months, with this strange time that we're in with COVID, is, you know, turning out to be more and more relevant: remotely administering and managing infrastructure that scales from just hundreds of nodes to thousands of nodes. It doesn't really matter. With behind-the-scenes software updates, with global AI analytics and insights, all of that combined basically reduces the operational overhead when it comes to on-premises infrastructure by up to 75%, right? The other thing is support for any application, whether it's containerized, virtualized, or even bare metal applications. And the idea here is really consistency, leveraging server-based storage that doesn't require any Nebulon-specific software on the server, so you get the full power of your application servers for your applications, again, as the server was intended. And then the fourth benefit when it comes to smart infrastructure is, of course, doing this all at a lower cost and with better data center density. And that is really comparing it to three-tier architectures, where you have your server, your SAN fabric, and then you have external storage, but also comparing it with hyper-converged infrastructure software, right, that is consuming resources of the application servers, think CPU, think memory and networking.
So basically you get a lot more density with that approach compared to those architectures. >> Okay, I want to dig into some of that differentiation too, but what exactly do I buy from you? Do I buy a software subscription? Is that right? Can you explain that a little bit? >> Right. So basically the way we do this is really by leveraging two key new innovations, right? And you see why I made the bridge to smart home technology, because the approach is similar, right? The first is, you know, the introduction of a cloud control plane that basically manages this on-premises application infrastructure; of course, that is delivered to customers as a service. The second one is, you know, a new infrastructure model that uses IoT endpoint technology, and that is embedded into standard application servers and the storage within those application servers. Let me add a couple of words to that to explain a little bit more. So really, at the heart of smart infrastructure, in order to deliver this public cloud experience for any on-prem application, is this cloud-based control plane, right? So we've built this the way we recommend our customers use a public cloud, and that is, you know, building your software on modern technologies that are vendor-agnostic, so it could essentially run anywhere, whether that is, you know, any public cloud vendor, or in our own data centers if regulatory requirements change. It's massively scalable and responsive, no matter how large the managed infrastructure is. But really the interesting part here, Dave, is that the customer doesn't really have to worry about any of that; it's delivered as a service. So what a customer gets from this cloud control plane is a single API endpoint, just as they'd get with a public cloud. They get a web user interface from which they can manage all of their infrastructure, no matter how many devices, no matter where it is; it can be in the data center, it can be in an edge location anywhere in the world. They get template-based provisioning, much like a marketplace in a public cloud. They get analytics, predictive support services, and super easy automation capabilities. Now, the second thing that I mentioned is this server embedded software, the server embedded infrastructure software, and that is running on a PCIe-based offload engine. That is really acting as the managed IoT endpoint within the application server that I mentioned earlier. And that approach really further converges modern application infrastructure. It really replaces the software defined storage approach that you'll find in hyper-converged infrastructure software, and it does that by embedding the data services, the storage data services, into silicon within the server. Now, this offload engine, we call that a services processing unit, or SPU in short. And that is really what differentiates us from hyper-converged infrastructure. And it's quite different than a regular accelerator card that you get with some of the hyper-converged infrastructure offerings. It's different in the sense that the SPU runs basically all of the shared and local data services; it's not just accelerating individual algorithms, individual functions. It basically provides all of these services alongside the CPU, with the boot drive and with data drives, and in essence provides you with a separate fault domain from the server, so, for example, if you reboot your server, the data plane remains intact. You know, it's not impacted by that.
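To make the single API endpoint and template-based provisioning concrete, here is a minimal sketch of what driving that kind of cloud control plane from a script could look like. The endpoint URL, token, template name, and payload fields below are hypothetical placeholders for illustration only; they are not Nebulon's documented API, which may differ.

```python
# Hypothetical sketch only: provisioning on-prem infrastructure from a template
# through a cloud control plane's REST API. The URL, token, and payload fields
# are illustrative assumptions, not a vendor's actual API surface.
import requests

CONTROL_PLANE = "https://cloud-control-plane.example.com/api/v1"  # placeholder URL
TOKEN = "example-api-token"                                       # placeholder credential

payload = {
    # a marketplace-style template describing the desired application infrastructure
    "template": "general-purpose-vmware",
    # the SPU-equipped servers (identified here by made-up serial numbers) to provision
    "servers": ["SPU-0001", "SPU-0002", "SPU-0003"],
    "options": {"encryption": True, "deduplication": True},
}

resp = requests.post(
    f"{CONTROL_PLANE}/provision",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Provisioning request accepted:", resp.json())
```

The point of the sketch is the operating model rather than the specific calls: one authenticated endpoint and a template describe the desired state, and the control plane drives the embedded endpoints in each server, instead of an administrator configuring infrastructure console by console.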
>> Okay. So I want to stay on that for just a second, Craig, if I could. I'm very clear on how you're different from, as Tobias said, the three-tier server, SAN fabric, external array approach. The HCI thing's interesting because, in some respects, the HCI guys, take Nutanix, they talk about cloud and becoming more friendly with developers and the API piece, but what's your point of view, Craig, on how you position relative to, say, HCI? >> Yeah, absolutely. So everyone gets what three-tier architecture is and was, and HCI software, you know, emerged as an alternative to the three-tier architectures. Everyone, I think, today understands that data services are, you know, SDS, software hosted in the operating system of each HCI device, and consume some amount of CPU, memory, network, whatever. And it's typically constrained to a hypervisor environment, kind of where most of that stuff is done. And over time, these platforms have added some monitoring capabilities, predictive analytics, typically provided by the vendor's cloud, right? And as Tobias mentioned, some HCI vendors have augmented this approach by adding an accelerator to make things like compression and dedupe go faster, right? Think SimpliVity or something like that. The difference that we're talking about here is, the infrastructure software that we deliver as a service is embedded right into server silicon. So it's not sitting in the operating system of choice. And what that means is you get the full power of the server you bought for your workloads. It's not constrained to a hypervisor-only environment; it's OS agnostic. And, you know, it's entirely controlled and administered by the cloud, versus, with most HCI, an on-prem console that manages a cluster or two on-prem. And, you know, think of it from an automation perspective. When you automate something, you've got to set up your playbook kind of cluster by cluster. And depending on what versions they're on, APIs are changing, behaviors are changing. So it's a very different approach at scale. And so again, for us, we're talking about something that gives you a much more efficient infrastructure that is then managed by the cloud and gives you this full kind of operational model you would expect for any kind of cloud-based deployment. >> You know, I've got to go back. You guys obviously have some 3PAR DNA hanging around, and you know, of course you remember well the 3PAR ASIC; it was kind of famous at the time and it was unique. And I bring that up only because you've mentioned the silicon a couple of times, and a lot of people go, yeah, whatever, but we have been on this, particularly with ARM. And I want to share with the audience, if you follow my Breaking Analysis, you know this. If you look at the historical curve of Moore's Law with x86, it's the doubling of performance every two years, roughly; that comes out to about 40% a year. That's moderated down to about 30% a year now. If you look at the ARM ecosystem and take, for instance, the Apple A15 and the previous series, for example, over the last five years, the performance, when you combine the CPU, GPU, NPU, the accelerators, the DSPs, which, by the way, are all customizable, is growing at 110% a year, and the SoC costs 50 bucks. So my point is that you guys are a perfect example of doing offloads with a way more efficient architecture. You're now on that curve that's growing at 100% plus per year, whereas a lot of the legacy storage is still on that 30% a year curve, and so it's cheaper, lower power. That's why I love it, by the way; as you were bringing in the IoT and the smart infrastructure, this is the future of storage and infrastructure.
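For reference, the annualized figures quoted above follow from simple compound-growth arithmetic; here is a quick sketch (the chip-level growth rates themselves are the speaker's estimates, only the conversion math is computed here):

```python
# Compound-growth arithmetic behind the quoted rates.
# Doubling every two years implies 2**(1/2) - 1, roughly 41% per year,
# which is the "about 40% a year" figure for the classic Moore's Law curve.
def annual_rate(total_factor: float, years: float) -> float:
    """Annualized growth rate implied by a total growth factor over a period."""
    return total_factor ** (1 / years) - 1

print(f"Double every 2.0 years -> {annual_rate(2, 2.0):.1%} per year")   # ~41.4%
print(f"Double every 2.5 years -> {annual_rate(2, 2.5):.1%} per year")   # ~31.9%, near the ~30% figure

# Conversely, 110% growth per year sustained for five years compounds to roughly 41x:
print(f"110% per year for 5 years -> {(1 + 1.10) ** 5:.0f}x total")
```

The gap between a roughly 30 to 40% curve and a 100%-plus curve is the substance of the argument: offload silicon riding the ARM SoC trajectory compounds far faster than general purpose x86 storage controllers.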
>> Absolutely. And the thing I would emphasize is it's not limited to storage. Storage is a big issue, but we're talking about your application infrastructure, and you brought up something interesting on the GPU, the SmartNIC of things. And just to kind of level set with everybody there, there's the HCI world, and then there's this SmartNIC DPU world, whatever you want to call it, where it's effectively a network card; it's got that specialized processing onboard and firmware to provide some network, security, and storage services, and think of it as a PCIe card in your server. It connects to an external storage system, so think NVIDIA BlueField-2 connecting to an external NVMe storage device. And the interesting thing about that is, you know, storage processing is offloaded from the server. So as we said earlier, good, right, you get the server back to your application, but storage moves out of the server. And it starts to look a little bit like an external storage approach versus a server based approach. And infrastructure management is done by, you know, the server SmartNIC, with some monitoring and analytics coming from, you know, your supplier's cloud support service. So complexity creeps back in if you start to lose that, you know, heavily converged approach. Again, we are taking advantage of storage within the server and, you know, keeping this a real server based approach, but distinguishing ourselves from the HCI approach, 'cause there's a real ROI there. And when we talk to folks who are looking at new and different ways, we talk a lot about the cloud, and I think we've done a bit of that already, but then at the end of the day, folks are trying to figure out, well, okay, but then what do I buy to enable this? And what you buy is your standard server recipe. So think your favorite HPE, Lenovo, Supermicro, whatever, whatever your brand, and it's going to come enabled with this IoT endpoint within it, so it's really a smart server, if you will, that can then be controlled by our cloud. And so you're effectively buying, you know, from your favorite server vendor, a server option that is this endpoint and a subscription. You don't buy any of this from us, by the way; it's all coming from them. And that's the way we deliver this. >> You know, sorry to get into the plumbing, but this is something we've been on, and a facet of it. Is that silicon custom designed, or is it pretty much off the shelf? Do you guys add any value to it? >> No, there are off the shelf options that can deliver tremendous horsepower in that form factor. And so we take advantage of that to, you know, do what we do in terms of, you know, creating these sort of smart servers with our endpoint. And so that's where we're at. >> Yeah. Awesome. So guys, what's your sweet spot, you know, why are customers, you know, what are you seeing customers adopting? Maybe some examples you guys can share? >> Yeah, absolutely. So I think Tobias mentioned that, because of the architectural approach, there's a lot of flexibility there; you can run virtualized, containerized, bare metal applications. The question is, where are folks choosing to get started? And those use cases with our existing customers revolve heavily around virtualization modernization. So they're going back into their virtualized environment, whether their existing infrastructure is array-based or HCI-based.
And they're looking to streamline that, save money, automate more, the usual things. The second area is the distributed edge. You know, the edge is going through tremendous transformation with IoT devices, 5G, and trying to get processing closer to where customers are doing work. And so that distributed edge is a real opportunity because, again, it's a more cost-effective, more dense infrastructure. The cloud effectively can manage across all of these sites through a single API. And then the third area is cloud service provider transformation. We do a fair bit of business with, you know, cloud service providers, CSPs, who are looking at trying to build top line growth, trying to create new services, and drive better bottom line. And so this is really, you know, as much a revenue opportunity for them as a cost saving opportunity. And then the last one is this notion of, you know, bringing the cloud on-prem. We've done a cloud repatriation deal. And I know you've seen a little of that, but maybe not a lot of it. And, you know, I can tell you, in our first deals we've already seen it, so it's out there. Those are the places where people are getting started with us today. >> It's just interesting, you're right. I don't see a ton of it, but if I'm going to repatriate, I don't want to go backwards. I don't want to repatriate to legacy. So it actually does kind of make sense that I repatriate to essentially a component of on-prem cloud that's managed in the cloud; that makes sense to me to buy. But today you're managing from the cloud, you're managing on-prem infrastructure. Maybe you could show us a little leg, share a little roadmap. I mean, where are you guys headed from a product standpoint? >> Right, so I'm not going to go too far out on the limb there, but obviously, right, one of the key benefits of a cloud managed platform is this notion of a single API, right? We talked about the distributed edge where, you know, think of a retailer that has, you know, thousands of stores, each store having local infrastructure. And, you know, if you think about the challenges that come with, you know, just administering those systems, rolling out firmware updates, rolling out updates in general, monitoring those systems, et cetera, then having a single console, a cloud console, to administer all of that infrastructure, obviously, you know, the benefits are easy to see. If you think about that and spin it further, right, from the use cases and the types of users that we've seen, and Craig talked about them at the very beginning, you can think about this as a hybrid world, right? Customers will have data that they'll have in the public cloud. They will have data and applications in their data centers and at the edge. Obviously it is our objective to deliver the same experience that they gained from the public cloud on-prem, and eventually, you know, those two things can come closer together. Apart from that, we're constantly improving the data services. And as you mentioned, ARM is on a path that is becoming stronger and faster, so obviously we're going to leverage that and build out our data storage services and become faster. But really the key thing that I'd like to mention all the time, and this is related to roadmap, but it's rather about feature delivery, right? The majority of what we do is in the cloud; our business logic is in the cloud; the capabilities, the things that make infrastructure work, are delivered in the cloud.
And, you know, it's provided as a service. So compared with your Gmail, you know, your cloud services, one day, you don't have a feature, the next day you have a feature, so we're continuously rolling out new capabilities through our cloud. >> And that's about feature acceleration as opposed to technical debt, which is what you get with legacy features, feature creep. >> Absolutely. The other thing I would say too, is a big focus for us now is to help our customers more easily consume this new concept. And we've already got, you know, SDKs for things like Python and PowerShell and some of those things, but we've got, I think, nearly ready, an Ansible SDK. We're trying to help folks better kind of use case by use case, spin this stuff up within their organization, their infrastructure. Because again, part of our objective, we know that IT professionals have, you know, a lot of inertia when they're, you know, moving stuff around in their own data center. And we're aiming to make this, you know, a much simpler, more agile experience to deploy and grow over time. >> We've got to go, but Craig, quick company stats. Am I correct, you've raised just under 20 million. Where are you on funding? What's your head count today? >> I am going to plead the fifth on all of that. >> Oh, okay. Keep it stealth. Staying a little stealthy, I love it. Really excited for you. I love what you're doing. It's really starting to come into focus. And so congratulations. You know, you got a ways to go, but Tobias and Craig, appreciate you coming on The Cube today. And thank you for watching this Cube Conversation. This is Dave Vellante. We'll see you next time. (upbeat outro music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Cisco | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Tobias Flitsch | PERSON | 0.99+ |
Tobias | PERSON | 0.99+ |
Craig Nunes | PERSON | 0.99+ |
Lenovo | ORGANIZATION | 0.99+ |
100% | QUANTITY | 0.99+ |
Craig | PERSON | 0.99+ |
Mist Systems | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Supermicro | ORGANIZATION | 0.99+ |
fifth | QUANTITY | 0.99+ |
Nebulon | ORGANIZATION | 0.99+ |
less than 10 minutes | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Juniper | ORGANIZATION | 0.99+ |
50 bucks | QUANTITY | 0.99+ |
three decade | QUANTITY | 0.99+ |
Python | TITLE | 0.99+ |
second thing | QUANTITY | 0.99+ |
Meraki | ORGANIZATION | 0.99+ |
Nebulon | PERSON | 0.99+ |
less than 10 minutes | QUANTITY | 0.99+ |
second | QUANTITY | 0.99+ |
Wikibon | ORGANIZATION | 0.99+ |
two things | QUANTITY | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
first deals | QUANTITY | 0.99+ |
each store | QUANTITY | 0.99+ |
PowerShell | TITLE | 0.99+ |
third area | QUANTITY | 0.98+ |
Martine | PERSON | 0.98+ |
today | DATE | 0.98+ |
third | QUANTITY | 0.98+ |
Nutanix | ORGANIZATION | 0.98+ |
A15 | COMMERCIAL_ITEM | 0.98+ |
three-tier | QUANTITY | 0.98+ |
Gmail | TITLE | 0.98+ |
First | QUANTITY | 0.98+ |
second principle | QUANTITY | 0.98+ |
Bluefield 2 | COMMERCIAL_ITEM | 0.98+ |
110% a year | QUANTITY | 0.98+ |
single console | QUANTITY | 0.98+ |
second area | QUANTITY | 0.98+ |
hundreds of nodes | QUANTITY | 0.98+ |
Moore | PERSON | 0.97+ |
about 40% a year | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
ARM | ORGANIZATION | 0.97+ |
VMware | ORGANIZATION | 0.97+ |
Cube | ORGANIZATION | 0.97+ |
three-part | QUANTITY | 0.97+ |
thousands of stores | QUANTITY | 0.97+ |
single | QUANTITY | 0.97+ |
fourth benefit | QUANTITY | 0.96+ |
two great guests | QUANTITY | 0.96+ |
first | QUANTITY | 0.96+ |
each | QUANTITY | 0.96+ |
second one | QUANTITY | 0.96+ |
More than a decade ago | DATE | 0.96+ |
about 30% a year | QUANTITY | 0.96+ |
HPE | ORGANIZATION | 0.96+ |
around 35% | QUANTITY | 0.95+ |
thousands of nodes | QUANTITY | 0.95+ |
up to 75% | QUANTITY | 0.95+ |
apple | ORGANIZATION | 0.95+ |
Ken Ringdahl, Veeam | Nutanix .NEXT EU 2019
>> Live from Copenhagen, Denmark, it's theCUBE, covering Nutanix .NEXT 2019, brought to you by Nutanix. Hello everybody, and welcome back to theCUBE's live coverage of Nutanix .NEXT here in Copenhagen, Denmark. I'm your host, Rebecca Knight, co-hosting alongside Stu Miniman. We're joined by Ken Ringdahl. He is the vice president of global alliance architecture at Veeam. Thank you so much for coming on theCUBE. It is your sixth time on theCUBE. So you are an illustrious... I know. And then a ring, and then a ring for is 10. We've got some sticks. Yeah, here you go. So you're here to talk about the partnership with Nutanix and, and uh, and, and Mine. So why don't you tell us a little bit about this partnership and the Mine ecosystem, and what you see for the future? >> Yeah, absolutely. So, you know, Nutanix is a really strategic partner for us. Uh, you know, I'd say we've been partners for quite a while, probably five, six years. But I would say the, the real sort of tipping point for our partnership was when we committed to go integrate with AHV. You know, we had supported vSphere from the beginning. That's, that's what Veeam was founded on. That's the foundation of our success. We went and did Hyper-V in 2011, and we didn't do another hypervisor. We still haven't even done KVM yet, but we saw the value in the Nutanix partnership, and we committed to doing AHV and delivered that, you know, in the middle of last year. And we've seen, you know, good pickup on that. But that was really the tipping point, when we sort of came in and wrapped our arms around the Nutanix ecosystem. And really, you know, if you want to embrace Nutanix, you embrace AHV, 'cause that's the core, right? That's, that's where they're going. That's their differentiation. Um, and so that was, that was sort of the tipping point. And of course, you know, we can certainly get into Mine and everything else we're doing. >> Well, Ken, first of all, it definitely was, you know, very much noticed in the industry. Uh, you know, Veeam, I remember back when Hyper-V support was announced and kind of a ripple went through the virtualization, uh, industry on that, and Veeam stepping forward and supporting AHV was, uh, you know, really speaking to not only the partnership but to the maturity of where Nutanix sits out there. Um, we know that the data protection space is quite hot, and a question people have had from day one was, well, will Nutanix address that directly themselves? Uh, they have Veeam, Rubrik here, you know, other partners are here. So how they are addressing that space, and Mine, uh, that, that is pretty interesting and different from, uh, you know, much of what we see out there. So, uh, bring us inside Mine, and you know, uh, Nutanix wants optionality to be there, so Veeam is one of the partners, but also, you know, uh, likely the most important first one there. >> Yeah. So you know, there's a lot of similarities between Nutanix and Veeam, especially when it comes to the general approach to partners. You know, we're a software defined, uh, data protection platform. Nutanix, you're right, had an option: hey, maybe we go build this ourselves, or we acquire and try to get that revenue, maybe the data protection revenue. And they've decided to partner, just like we've decided to partner, you know, for secondary storage and everything else. And that, that really does lead us to Mine, because you know, a lot of our competitors do ship their software on white box hardware.
Uh, some of the emerging startups are doing that, and even some of the legacy players as well, you know, whether it's a Supermicro box or an Intel box. We've taken a different approach and said, hey look, you know, we, we know what we're good at, and we know we want customer choice. >> And even, you know, Dheeraj and others at the keynote today talked about no vendor lock-in. Where we are, we have very similar approaches. And so, you know, we got together over a year ago, a year and a half ago, and said, hey, look, you know, as Veeam, we, we see some customers that are now asking for their data protection... You know, Veeam was founded on being simple and easy, and there are even ways to take that to another level, like Mine, which is, hey look, we want to now even simplify the day zero experience, and even into the day one, day two ops, in terms of an integrated UI and other ways to bring, you know, the infrastructure together with your data protection. And so it made perfect sense. We got together and it was like, boom, a light bulb went off. We got on a whiteboard and we're like, yeah, we can do this. >> Like, you know, it's going to require joint development. And we've sort of made those commitments on both sides, and it's been well received. Now, it's not in the market yet. It will be soon. Um, but the customer feedback has been incredible. We've done this very successful beta, we've got lots and lots of pent-up customer demand. So it's like the sales teams are now saying, hey, when can we, you've been talking about it for a while, when can we have this? Because we have customers ready to buy. So we're there now, we're ready to bring this to market and excited about the opportunity together. >> So talk a little bit about the, the ins of that partnership. And you were just describing your ethos, which is making everything simple and easy, which is what we're hearing a lot here today at .NEXT. So does that just mean that you attract the same kinds of employees, so then therefore they work well together in the sandbox? I mean, how would you describe the, the cultures coming together in this joint development process? >> Yeah, I think we're, we're similar companies, right? We're a similar size. We're a similar age. We're similar, you know, just, just all around, you know, our, our culture of innovation. So, you know, when we got together it was, it was pretty simple. Now, now doing development as two companies together is always hard. It's never easy. It's even hard to do it when it's one company on your own, right, and get a, get a product to market. Um, so I'd be lying if I said there weren't bumps along the way. There always are. Uh, but you know, we've, we've worked through them, and we've, you know, we're, we're now, like I said, at that point. And I think our, our, just our similarities and our cultures, and really we have alignment at the executive level. And that's important, right, to, to get things done, because, you know, well, well, you know, all of us that are sort of working on this thing are maybe a level or two down, but when executive leadership is aligned, that's when things get done. And we have that between Nutanix and Veeam. >> Yeah. And Ken, the messaging that I'm hearing from Nutanix now reminds me of what I was hearing a couple of years ago from Veeam, specifically when you talk about cloud. Uh, so a couple of years ago, very much, I saw Microsoft up on stage, you know, living with AWS.
What are you hearing from your customers, and, you know, do you see those parallel journeys, or will the AHV integration mean that, as Nutanix goes along that journey, Nutanix offerings will be able to live in these multiple cloud environments sometime too? >> Yeah. So I think a little bit of both, right? I think, I think they'll definitely be able to live out there. I mean, you know, you see VMware now wrapping their arms around all the hyperscale public cloud vendors. I mean, we heard about Xi Clusters, and that was announced in Anaheim, and we saw a demo of it today. And, and, you know, our goal is to support those workloads wherever they are. You know, as I said before, we, we sorta made our hay, and we were founded on, attaching to vSphere, then Hyper-V, then AHV, and now AWS and Azure and all these other environments. And really, you know, at the roots of it, we, we follow our customers along their journey, right? So, you know, there's customers today that, you know, maybe smaller, newer companies, that go straight to AWS, straight to Azure; they're born in the cloud and they're cloud only.
Where is Veeam moving with the HP support for some of these other solutions that Nutanix has? Yeah, so, so we've got a very big release coming, you know, in the next call it few months, quarter or so. Um, that is called V 10. You know, and if you guys read Vema on a couple of years ago, we've talked about V 10 and that was a number of features in there. NAS is a big one for us. Um, and it's one that that is probably the most asked for feature that we currently don't have. >>And so having support for files and we've already tested with the beta, you know, we know when we come out with that in a GA form that we're going to be successful with, with files. Uh, object storage is another one that was also part of the V tenet umbrella when we announced it, you know, while ago. Um, and it's been hugely successful for us. It's revolutionized, kind of the way that our customers look at longterm storage is, is, Hey, I can, I can move that to AWSs three or Azure blob or, you know, cloudy in or Swift stack or something else on pram or Nutanix objects. Um, you know, because again, customer choice, but, but we've, you know, we've embraced that because that's where customers are going. She asks, you know, what a customer that, that's, that's where, that's where they're going. They, they, they say, Hey, I want, you know, a lot of them want to get rid of tape, you know, and, and what's the best way to get in this is features of tape in object storage, right? There's object lock and ways to do, you know, uh, write once, read, read many times. So we're, you know, we look at object storage a little bit as, as the next generation of tape. Now it's, you know, it's not exactly that. There's lots of different use cases, but, but for us and for our customers, they're looking, they're looking to, to do the next generation data center. And that includes having object storage is a longterm tier. Uh, you know, for cost reasons, for manageability reasons, you know, of the light. >>Can you talk a little bit about the partner ecosystem and the evolution of it and particularly because the technology industry is, is changing so fast and you, you, you started this conversation by talking about how much your culture is aligned with Nutanix culture. How do you see, with, with these fast changing companies, fast changing technologies, how do you see five, 10 years from now, what will the technology landscape look like? >>Yeah, certainly. I mean obviously the, the push to cloud, that's big, right? Where we're making a lot of, a lot of changes on our site, where, where we're bringing out new products or bringing out new features that specifically take you to cloud. Um, you know, we, we were on with you guys at, at world and, and you know, there was, you know, project Tansu and all this other stuff about Cuba and it was, it was, that was the Coobernetti's conference. Right. And, and, uh, you know, I said earlier, you know, we want to move along at the pace that our customers want to go. So, you know, those, those sort of born in the cloud companies are going straight to Kubernetes, but we're moving along with our customers when it comes to Kubernetes and containers. So, so yeah, we're, we're paying attention to it. Do we have a product that can support every bit of, you know, Kubernetes and containers yet? 
>> Can you talk a little bit about the partner ecosystem and the evolution of it, particularly because the technology industry is, is changing so fast, and you, you, you started this conversation by talking about how much your culture is aligned with Nutanix's culture. How do you see, with, with these fast changing companies, fast changing technologies, how do you see five, 10 years from now, what will the technology landscape look like? >> Yeah, certainly. I mean, obviously the, the push to cloud, that's big, right? We're making a lot of, a lot of changes on our side, where, where we're bringing out new products or bringing out new features that specifically take you to cloud. Um, you know, we, we were on with you guys at, at VMworld, and, and you know, there was, you know, Project Tanzu and all this other stuff about Kubernetes, and it was, it was, that was the Kubernetes conference, right. And, and, uh, you know, as I said earlier, you know, we want to move along at the pace that our customers want to go. So, you know, those, those sort of born in the cloud companies are going straight to Kubernetes, but we're moving along with our customers when it comes to Kubernetes and containers. So, so yeah, we're, we're paying attention to it. Do we have a product that can support every bit of, you know, Kubernetes and containers yet? No, but, but we're, you know, there are these things that we're working on, and you know, in, in the way that Veeam usually develops software, we're not usually first, but we usually come out with something that is rock solid, ready to go, customer ready. We have 355,000 customers; we can't afford to... and, and, and we're the stewards of their data. Uh, so when we come out with something, yeah, we may take slightly longer to do it, but you can be sure that it's rock solid, stable, robust, and that's, you know, that's our general approach. And so when you ask, you know, where are our customers going, you know, they're definitely going to the cloud, they're going to Kubernetes, they're, you know, all these, all these new technologies, and, and, and, and we sort of step back and we ask our customers, hey, are you doing this? You know, what's your plan for this? Is it two years? Is it one year? Is it five years? Um, and we adjust accordingly. >> Yeah. Uh, Ken, anything in particular for your European customers that, that, that you can share? >> Yeah, I think, you know, when you think European customers and uniqueness from the rest of the world, I mean, you start with GDPR, right? That, that was, you know, a huge thing that went into effect a year ago. Um, and we've, you know, we've, we've done things there, but they're, they're, they're very sensitive to, you know, that, and, and being able to, you know, provide that capability for their customers. So, so I'd, I'd put that at the top of the list. I mean, cloud is a big one. You know, I think as we look at the hyperscalers in particular, AWS and Azure, you know, the US is a big country. You don't need a lot of data centers to cover the country. But now you look at GDPR, and some things need to stay in the, in the envelope of a, of a country. And hey, there's, you know, lots of countries in Europe, and, and, and so more and more data centers. So the support of those public cloud vendors and the, the sprawl of the data centers is, is really important. So having that coverage and being able to provide customer choice is incredibly important to European customers. >> Well, Ken, thank you so much for coming back on theCUBE. We always have a fun time talking to you. >> Right. Thank you. Next time will be the seventh. >> I'm Rebecca Knight, for Stu Miniman. Stay tuned for more of theCUBE's live coverage of Nutanix .NEXT.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Rebecca Knight | PERSON | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Ken | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Ken Ringdahl | PERSON | 0.99+ |
100% | QUANTITY | 0.99+ |
five | QUANTITY | 0.99+ |
sixth time | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
two companies | QUANTITY | 0.99+ |
Veem | ORGANIZATION | 0.99+ |
AWSs | ORGANIZATION | 0.99+ |
Copenhagen, Denmark | LOCATION | 0.99+ |
one year | QUANTITY | 0.99+ |
hundred percent | QUANTITY | 0.99+ |
Dheeraj | PERSON | 0.99+ |
2011 | DATE | 0.99+ |
five years | QUANTITY | 0.99+ |
HP | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
Newtanics | ORGANIZATION | 0.99+ |
two years | QUANTITY | 0.99+ |
10 | QUANTITY | 0.99+ |
GDPR | TITLE | 0.99+ |
today | DATE | 0.99+ |
Anaheim | LOCATION | 0.99+ |
both sides | QUANTITY | 0.98+ |
V 10 | TITLE | 0.98+ |
six years | QUANTITY | 0.98+ |
Vema | TITLE | 0.98+ |
year and a half ago | DATE | 0.98+ |
a year ago | DATE | 0.97+ |
Nutanix dot | ORGANIZATION | 0.97+ |
first | QUANTITY | 0.97+ |
Swift | TITLE | 0.96+ |
one company | QUANTITY | 0.96+ |
355,000 customers | QUANTITY | 0.96+ |
both | QUANTITY | 0.96+ |
Seventh | QUANTITY | 0.95+ |
Intel | ORGANIZATION | 0.95+ |
last year | DATE | 0.94+ |
2019 | DATE | 0.94+ |
Cuba | LOCATION | 0.93+ |
HV | ORGANIZATION | 0.93+ |
couple of years ago | DATE | 0.93+ |
one | QUANTITY | 0.93+ |
Veeam | PERSON | 0.93+ |
vSphere | TITLE | 0.92+ |
European | OTHER | 0.9+ |
first one | QUANTITY | 0.9+ |
Rebecca | PERSON | 0.89+ |
Kubernetes | ORGANIZATION | 0.87+ |
Azure | TITLE | 0.86+ |
vice president | PERSON | 0.81+ |
Stephan Fabel, Canonical | KubeCon 2018
>> Live from Seattle, Washington. It's theCUBE, covering KubeCon and CloudNativeCon, North America 2018, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome back everyone. We're live here in Seattle for theCUBE's exclusive coverage of KubeCon and CloudNativeCon 2018. I'm John Furrier with Stu Miniman. Our next guest is Stephan Fabel, who is the Director of Product Management at Canonical. CUBE alumni, welcome back. Good to see you. >> Thank you. Good to see you too. Thanks for having me. >> You guys are always in the middle of all the action. It's fun to talk to you guys. You have a pulse on the developers, you have a pulse on the ecosystem. You've been deep in it for many, many years. Great value. What's hot here, what's the announcement, what's the hard news? Let's get the hard news out of the way. What's happening? What's happening here at the show for you guys? >> Yeah, we've had a great number of announcements, a great number of threads of work that came to fruition over the last couple of months, and just last week we announced hardware reference architectures with our hardware partners, Dell and SuperMicro. We announced ARM support, ARM64 support for Kubernetes. We released version 1.13 of our Charmed Distribution of Kubernetes last week. And we also released, very proud to release, MicroK8s: Kubernetes in a single snap for your workstation, in the latest release, 1.13. >> Maybe explain that, 'cause we often talk about scale, but there is big scale, and then we're talking about edge, we're talking about so many of these things. >> That's right. >> That small scale is super important, so- >> It really is, it really is, so, MicroK8s came out of this idea that we want to enable a developer to just quickly stand up a Kubernetes cluster on their workstation. And it really came out of this idea to really enable, for example, AI/ML workloads, locally from development on the workstation all the way to on-prem and into the public cloud. So that's kind of where this whole thing started. And it ended up being quite obvious to us that if we do this in a snap, then we actually can also tie this into appliances and devices at the edge. Now we're looking at interesting new use cases for Kubernetes at the edge as an actual API endpoint, so it's quite nice. >> Stephan, talk about ... I want to take a step back. There's kind of dynamics going on in the Kubernetes wave, which by the way is phenomenal, 8000 people here at KubeCon, up from 4000. It's got that hockey stick growth. It's almost like a Moore's Law, if you will, for the events. You guys have been around, so you have a lot of existing big players that have been in the space for a while, doing a lot of work around cloud, multi-cloud, whatever ... That's the new word, but again, you guys have been there. You've got the Ciscos of the world, you guys, big players actively involved, a lot of new entrants coming in. What's your perspective of what's happening here? A lot of people are looking at this scratching their heads, saying: okay, I get Kubernetes, I get the magic. Kubernetes enables a lot of things. What's the impact to me? What's in it for me as an enterprise or a developer? How do you guys see this marketplace developing? What's really going on here? >> Well I think that the draw to this conference and to technology and all the different vendors et cetera, it's ultimately a multi-cloud experience, right? 
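To make the MicroK8s idea Stephan describes above a bit more concrete, here is a minimal sketch of how a developer might stand up a single-node cluster on a workstation from a script. The snap channel, the dot-style microk8s.* command aliases, and the addon names are assumptions based on the 1.13-era packaging, not details from the conversation.

```python
# Sketch: bring up a local MicroK8s cluster for development and edge prototyping.
# Assumes an Ubuntu workstation with snapd; channel and addon names are assumptions.
import subprocess

def run(cmd):
    """Echo and run a shell command, raising if it fails."""
    print("+ " + " ".join(cmd))
    subprocess.run(cmd, check=True)

# Install the MicroK8s snap pinned to the 1.13 track.
run(["sudo", "snap", "install", "microk8s", "--classic", "--channel=1.13/stable"])

# Turn on a few addons useful for local AI/ML experiments.
run(["sudo", "microk8s.enable", "dns", "storage", "gpu"])

# Wait for the node to come up, then confirm it with the bundled kubectl.
run(["sudo", "microk8s.status", "--wait-ready"])
run(["sudo", "microk8s.kubectl", "get", "nodes", "-o", "wide"])
```

The same snap can be pushed to an appliance or edge device, which is the tie-in to the edge use cases mentioned above.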
It is about enabling workload portability and enabling the operator to operate Kubernetes independently of where it is being deployed. That's actually also the core value proposition of our Charmed Kubernetes. The idea that a single operational paradigm allows you to experience, to deploy, lifecycle manage and administer Kubernetes on-prem, as well as on any of the public clouds, as well as on other virtual substrates, such as VMware. So ultimately I think the consolidation of application delivery into a single container format, such as Docker and other compatible formats, OCI formats, right? That was ultimately a really good thing, 'cause it enabled that portability. Now I think the question is, I know how to deploy my applications in multiple ways, 'cause it's always the same API, right? But how do I actually manage a lot of Kubernetes clusters and a lot of Kubernetes API endpoints all over the place? >> So break down the hype and reality, because again, a lot of stuff looks good on paper. Love the soundbites of people saying, "Hey, Kubernetes," all this stuff. But people are admitting some things that need to be done, work areas. Security is a big concern and people are working on that. Where is the reality? Where does the rubber meet the road when it comes down to, "Okay, I'm an enterprise. What am I buying into with Kubernetes? How do I get there?" We heard Lyft take an approach that's saying, "Look, it solved one problem." Get a beachhead and take the incremental approach. Where's the hype, where's the reality? Separate that for us. >> I think that there is certainly a lot of hype around the technology aspect of Kubernetes. Obviously containerization is in vogue. This is how developers choose to engage in application development. We have microservices architecture. All of those things we're very well aware of, and they have been around for quite some time and in the conversation. Now looking at container management, container orchestration at scale, it was a natural fit for something like Kubernetes to become quite popular in this space. So from a technology perspective I'm not surprised. I think the rubber meets the road, as always, in two things: in economics and in operations. So if I can roll out more Kubernetes clusters per day, or more containers per day, than my competitor, I gain a competitive advantage, and the cost per container is ultimately what's going to be the deciding factor here. >> Yeah, Stephan, when I think about developers, how do I start with something and then how do I scale it out, and the economics of that? I think Canonical has a lot of experience with that to share. What are you seeing ... What's the same, what's different about this ecosystem, cloud native versus when we were just talking about Linux or previous waves of infrastructure? >> Well I think that ultimately Kubernetes, in and of itself, is a mechanism to enable developers. It plays one part in the whole software development lifecycle. It accelerates a certain part. Now it's on us, distributors of Kubernetes, to ensure that all the other portions of this whole lifecycle and ecosystem around Kubernetes are covered: where do I deploy it? How do I lifecycle manage it? If there's a security breach like last Monday, what happens to my existing stack and how does that go down? That acceleration is not solved by Kubernetes, it's solved for Kubernetes. >> Your software lives in lots and lots of environments. 
Maybe you can help clarify for people trying to understand how Kubernetes fits, and when you're playing with the public cloud, your Kubernetes versus their Kubernetes. The distinction I think is, there's a lot of nuance there that people may need help with. >> That's true, yeah. So I think that, first of all, we always distance ourselves from the notion of having our Kubernetes. I think we have a distribution of Kubernetes. I think there are conformance tests in place, and they're in place for a reason. I think it is the right approach, and we won't install a forked version of Kubernetes anytime soon. Certainly, that is one of the principles we adhere to. What is different about our distribution of Kubernetes is the operational tooling and the ability to really cookie-cutter out Kubernetes clusters that feel identical, even though they're distributed and spread across multiple different substrates. So I think that is really the fundamental difference of our Kubernetes distribution versus others that are out there on the market. >> The role of developers now, 'cause obviously you're seeing a lot of different personas emerging in this world. I'm just going to lay them out there and I want to get your reaction. The classic application developer, the ones who are sitting there writing code inside a company. It could be a consumer company like Lyft or an enterprise company that needs ... They're rebuilding inside, so it's clear that CIOs or enterprises, CXOs or whatever the title is, they're bringing more software in-house, bringing that competitive advantage under application development. You have the IT pro expert, practitioner kind of role, classic IT, and then you've got the open source community vibe, this show. So you've got these three things interplaying with each other. This show, to me, feels a lot like an open source show, which it is, but it also feels a lot like an IT show. >> Which it also is. >> It also is, and it feels like an app development show, which it also is. So, opportunity, challenge, is this a marketplace condition? What are your thoughts on these kinds of personas? >> Well I think it's really a question of how far you are willing to go in your implementation of devops cultural change, right? If you look at that notion of devops and that movement that has really taken hold in people's minds and hearts over the last couple of years, we're still far off in a lot of ways and a lot of places, right? Even the places that say they're doing devops, they're still quite early, if at all, on that adoption curve. I think bringing operators, developers and IT professionals together in a single show is a great way for the community and for the market to actually engage in a larger devops conversation, without the constraints of the individual enterprises that those teams find themselves in. If you can just talk about how you should do something better and how it would work, and there are other kinds of personas and roles at the same table, it is much better to have the conversation without the constraint of, like, a deadline or a milestone, or some outage somewhere. Something is always going on. Being able to just have that conversation around a technology and really say, "Hey, this is going to be the one, the vehicle that we use to solve this problem and further that conversation," I think is extremely powerful. >> Yeah, and we always talk about who's winning and who's losing. It's what media companies do. We do it on theCUBE, we debate it. 
At the end of the day we always like ... There's no magic quadrant for this kind of market, but the scoreboard can be customers. Amazon's got over 5000 reputable customers. I don't know how many CNCF has. It's probably a handful, not 5000. The customer implications are really where this is going. Multi-cloud equals choice. What are your conversations like with customers? What do you see on the customer landscape in terms of appetite, IQ, or progress for devops? We were talking, not everyone's on serverless yet, and that's so obvious it's going to be a big thing. Enterprises are hot right now and they want the tech. Seeing the cloud growth, where's your customer base? What are those conversations like? Where are they in the adoption of cloud native? >> It's an extremely interesting question actually, because it really depends on whether they started with PaaS or not. If they ever had a PaaS strategy then they're mostly disillusioned. They came out, they thought it was going to solve a huge problem for them and save them a lot of money, and it turns out that developers want more flexibility than any PaaS approach really was able to offer them. So ultimately they're saying, "You know what, let's go back to basics. I'll just give you a Kubernetes API endpoint. You already know how to deal with everything else beyond that," and actually you're not cookie-cuttering out PostgreSQL- >> Kubernetes is a reset to PaaS. >> It really is. It kind of disrupted that whole space, and took a step back. >> All right, Stephan, how about serverless? So a lot of discussion about Knative here. We've been teasing out where that fits compared to functions from AWS and Azure. What's the Canonical take on this? What are you hearing from your customers? >> So serverless is one of those ... Well, it's certainly a hot technology and a technology of interest to our customers, but we have longstanding partnerships with Galactic Fog and others in place around serverless. I haven't seen real production deployments of that yet, and frankly it's probably going to take a little bit longer before that materializes. I do think that there's a lot of effort right now in containerization. Lots of folks are at that point where they are ready to, and are already running, containerized workloads. I think they're busy now implementing Kubernetes. Once they have done that, I think they'll think a little bit more about serverless. >> One of the things that interests me about this ecosystem is the rise of Kubernetes, the rise of choice, the rise of a lot of tools, a lot of services, trying to fend off the tsunami wave that's hit the beach out of Amazon. I've always said in theCUBE that that's ... They're going to take as much inland territory on this tsunami unless someone puts up a sea wall, and I think this is this community here. The question is, is that ... And I want to get your expert opinion on this, because the behemoths, the big guys, are getting richer. The innovation's coming from them, they have scale. You mentioned that as a key point in the value of Kubernetes, is scale. As one of those players I would consider in the big size, not a behemoth like Amazon, you've got a unique position. How can the industry move forward with disruption and innovation, with the big guys dominating? What has to happen? Is it going to change the size of certain TAMs? Are there going to be new service providers emerging? 
Something's got to give: either the big guys get richer at the expense of the little guys, or the market expands with new categories. How do you guys look at that? Developers are out there, so is it promising to look at new categories? But your thoughts. >> I think it's ... So from a technology perspective, certainly there could be a disruptive technology that comes in and just eats their lunch, which I don't believe is going to happen, but I think it might actually be more of a market function. If it goes down to the economics, as they start to compete there will be a limit to the race to the bottom. So if I go in on an economic advantage as a public cloud, then I can only take that so far. Now, I can still take it a lot further, but there's going to be a limit to that ultimately. So I would say that all of the public clouds, and we see this increasingly happening, are starting to differentiate. So they're saying, "Come to me for AI/ML." "Come to me for a rich service catalog." "Come to me for workload portability," or something like that, right? And we'll see more differentiation as time goes on. I think that will develop in a little bit of a bubble, to the point where actually other players that people are not watching, for example, the Chinese clouds, right? Very large, very influential, very rich in services, they can come in and disrupt that market in a totally different way than a technology ever could. >> So a key point you mentioned earlier, I want to pivot on that and get to the AI conversation, but scale is a competitive advantage. We've seen that on theCUBE, we see it in the marketplace. Kubernetes by itself is great, but at scale it gets better; it's got knobs and policy. AI is a great example of where a dormant computer science concept that has not yet been unleashed ... Well, it gets unleashed by cloud. Now that's proliferating. AI, what else is out there? How do you see this trend around just large-scale Kubernetes, AI and machine learning coming on around the corner? That's going to be unique, and it's new. So you mentioned the Chinese cloud could be a developer here. It's a lever. >> Absolutely, we've been involved with kubeflow since the early days. Early days, it's barely a year, so what early days? It's a year old. >> It's yesterday. >> So a year ago we started working with kubeflow, and we published one of the first tutorials on how to actually get that up and running on Ubuntu and with our distribution of Kubernetes, and it has since been a focal point of our distribution. We do a couple of things with kubeflow. So the first thing, something that we can bring as a unique value proposition, is that because we're the operating system for almost all of GKE, all of AKS, all of EKS, we have such a strong standing as an operating system, and we have strong partnerships with folks like NVIDIA. One of the big milestones that we tried to achieve, and have since completed, actually as another announcement just last week, is the fully automatic deployment of GPU enablement on Kubernetes clusters, and having that identical experience happen across the public clouds. So, GPGPU enablement on Kubernetes is one of the key enablers for projects like kubeflow, which gives you machine learning stacks on demand, right? 
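To ground the GPU-enablement point in something runnable, the sketch below shows how a workload would ask the Kubernetes scheduler for a GPU once a device plugin is in place, using the official Kubernetes Python client. It assumes the NVIDIA device plugin is what advertises the nvidia.com/gpu resource, and the image name, namespace, and labels are placeholders rather than anything from the interview.

```python
# Sketch: request one GPU for a training pod through the Kubernetes API.
# Assumes the NVIDIA device plugin is installed so nodes expose nvidia.com/gpu.
from kubernetes import client, config

config.load_kube_config()  # use the developer's local kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="training-job", labels={"app": "ml-demo"}),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="example.com/ml/trainer:latest",  # placeholder image
                command=["python", "train.py"],
                # The scheduler will only place this pod on a node with a free GPU.
                resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Stacks like kubeflow are assembled out of exactly this kind of resource request, which is why consistent GPU enablement across clouds matters to the machine learning story.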
And then in parallel, we've been working with kubeflow in the community, very active; we formed a steering committee to really get the industry perspective into the needs of kubeflow as a community and to work with everybody else in that community to make sure that kubeflow releases on time and hopefully soon gets to a 1.0, which is due this summer, but right now they're focused on 0.4. That's a key area of innovation though, opportunity. >> Oh, absolutely. >> I see Amazon's certainly promoting that. What else is new? I've got one last question for you. What's next for you guys? Get a quick plug in for Canonical. What's coming around the corner, what's up? >> We're definitely happy to continue to work on GPGPU enablement. I think that is one of the key aspects that needs to stay ... That we need to stay on top of. We're looking at Kubernetes across many different use cases now, especially with our IoT-focused Ubuntu Core operating system, which we'll release shortly, and we're actually seeing new use cases for AI/ML inference, for example, out at the edge looking at drones, robots, self-driving cars, et cetera. We're working with a bunch of different industry partners as well. So increased focus on the devices side of the house can be expected in 2019. >> And that's the key, getting that data in a way that's really relevant. >> Absolutely. >> All right, Stephan, thanks for coming on theCUBE. I appreciate it, Canonical. Great insight here, bringing in more commentary to the conversation here at KubeCon, CloudNativeCon. Large-scale deployments as a competitive advantage. Kubernetes really does well there: data, machine learning, AI, all a part of the value, above and below Kubernetes. We're seeing a lot of great advances. CUBE coverage here in Seattle. We'll be back with more after this short break. (digital music)
SUMMARY :
Stephan Fabel of Canonical discusses the company's news at KubeCon and CloudNativeCon North America 2018: hardware reference architectures with Dell and SuperMicro, ARM64 support for Kubernetes, version 1.13 of the Charmed Distribution of Kubernetes, and MicroK8s, which packages Kubernetes in a single snap for workstations, appliances, and edge devices. He talks about multi-cloud portability and consistent operations across substrates, the economics and operations where Kubernetes adoption gets real, how Kubernetes reset customer expectations after PaaS, the early state of serverless, Canonical's GPU enablement work with NVIDIA and its involvement in kubeflow for machine learning, and an increased focus on IoT and edge devices in 2019.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Stephan | PERSON | 0.99+ |
2019 | DATE | 0.99+ |
Stephan Fabel | PERSON | 0.99+ |
NVIDIA | ORGANIZATION | 0.99+ |
Seattle | LOCATION | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Canonical | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
KubeCon | EVENT | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
SuperMicro | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
8000 people | QUANTITY | 0.99+ |
last Monday | DATE | 0.99+ |
one part | QUANTITY | 0.99+ |
CloudNativeCon | EVENT | 0.99+ |
Serverless | ORGANIZATION | 0.99+ |
Lyft | ORGANIZATION | 0.99+ |
two things | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
a year | QUANTITY | 0.98+ |
Seattle, Washington | LOCATION | 0.98+ |
Linux | TITLE | 0.97+ |
a year a ago | DATE | 0.97+ |
first thing | QUANTITY | 0.97+ |
Kubernetes | TITLE | 0.97+ |
first tutorials | QUANTITY | 0.96+ |
CloudNativeCon 2018 | EVENT | 0.96+ |
Ubuntu | TITLE | 0.96+ |
three | QUANTITY | 0.96+ |
Chinese | OTHER | 0.96+ |
One | QUANTITY | 0.95+ |
one problem | QUANTITY | 0.95+ |
wave | EVENT | 0.95+ |
kubeflow | TITLE | 0.95+ |
single show | QUANTITY | 0.94+ |
5000 | QUANTITY | 0.94+ |
last couple of months | DATE | 0.94+ |
CUBE | ORGANIZATION | 0.93+ |
AKS | ORGANIZATION | 0.93+ |
fourth version | QUANTITY | 0.92+ |
Kubernetese | TITLE | 0.92+ |
one last question | QUANTITY | 0.92+ |
this summer | DATE | 0.92+ |
4000 | QUANTITY | 0.91+ |
CNCF | ORGANIZATION | 0.91+ |
MicroK8s | ORGANIZATION | 0.91+ |
KubeCon 2018 | EVENT | 0.91+ |
single | QUANTITY | 0.87+ |
ARM | ORGANIZATION | 0.87+ |
last couple of years | DATE | 0.86+ |
first | QUANTITY | 0.85+ |
single container | QUANTITY | 0.85+ |
North America 2018 | EVENT | 0.84+ |
CoudNativeCon | ORGANIZATION | 0.83+ |
Sunil Potti, Nutanix | Nutanix .NEXT EU 2018
>> Live from London, England, it's theCUBE, covering the .NEXT Conference Europe 2018. Brought to you by Nutanix. >> Welcome back to London, England. This is theCUBE's coverage of Nutanix .NEXT 2018; 3,500 people gathered to listen to Sunil Potti. >> Thanks, Stu. >> Fresh off the keynote this morning, Sunil is the chief product and development officer with Nutanix. Glad we moved things around, Sunil, 'cause we know events, lots of things move, keynotes sometimes go long, but happy to have you back on the program. >> No, likewise, anytime. >> All right, so, I've been to a few of these, and there's one thing I hope you'll walk us through a little bit. So Nutanix, simplicity is always at its core. I have to say, it's taken me two or three times hearing the new, broad portfolio, the spectrum, and then I've got core, I've got essentials, I've got enterprise. I think it's starting to sink in for me, but it'll probably take people a little bit of time, so maybe let's start there. >> I mean, I think one of the biggest things that happened with Nutanix is that we went from a few products just twelve months ago to over ten products within the span of a year. And both internally as well as externally, while the product values are obvious, it's more that the consumption within our own sales teams, channel teams, as well as our customer base needed to be codified into something that could be a journey of adoption. So we took it customer-inwards, thinking about the journey that a customer goes through in adopting services in a world of multi-cloud, and before that, before you get to multi-cloud, you have to build a private cloud that is genuine, as we know. And before you do that, you have to re-platform your data center using HCI. So really, if you work backwards from that, you start with core, which is your HCI platform for modernizing your data center, and then you expand to a cloud platform for every workload, and then you can be in a position to actually leverage your multi-cloud services. >> Yeah, and I like that, I mean, starting with the customer first is where you have to be. And the challenge is, you know, every customer is a little bit different. You know, one of the biggest critiques is, you say, okay, what is a private cloud? Because they tend to be snowflakes. Every one's a little bit different, and we have a little bit of trouble understanding where it is, or did it melt all over the floor. So give us a little bit of insight into that and help us through those stages, the journey, the crawl-walk-run. >> Yeah, I think the biggest thing everyone has to understand here is that these are not discrete moving parts. Core is obviously your starting point, leveraging compute and storage in a software-defined way, the way that Amazon launched with EC2 and S3, right. But then every service that you consume on top of public cloud still leverages compute and storage. So in that sense, essentials is a bunch of additional services such as self-service, Files, and so forth, but you still need the core to build essentials on, to build a private cloud. And then from there onwards, you can choose other services, but you're still leveraging the core constructs. So in that sense, I think, both architecturally as well as from a product perspective, as well as from a packaging perspective, that's why they're synergistic in the way that things have rolled out. >> Okay, so looking at that portfolio. 
A lot of the customers I work with now, they don't start out in a data center; they've already moved past that, right? So they are leveraging a partner, the public cloud; they might not even be running virtual machines at all anymore. How does that fit into your portfolio? >> Yeah, I mean, increasingly what we are realizing, and you know, we've done this over the last couple of years, is that, for example, with Calm, you can use Calm to manage just your public clouds, without even managing your private cloud on Nutanix. Increasingly, with every new service that we're building out, we're doing it so that people don't have to pay the strategy tax of the stack. It needs to be adopted out of a desire of "I want to do it" versus "I need to do it." So, with Frame, you can get going on AWS or Azure in any region in an instant. You don't need to use any Nutanix software. Same thing with Epoch, with Beam. So I think as a company, what we're essentially all about is saying, let us give you a cloud-service-like experience, maybe workload-centric, if it is desktops and so forth. Or, if you are going to be at some point reaching a stage where you have to re-platform your data center to look like a public cloud, then we have the core, what we call the cloud platform itself, that'll help you get there as well. >> So, looking at re-platforming that data center. If I were to do that now for a customer, I wouldn't be looking at virtual machines, storage, networking; I'd be looking at containers or serverless or, you know, the new stuff. Again, what is Nutanix's answer to that? >> Yeah, I mean, I think what we've found is that there's quite a bit of adoption, obviously, of cloud-native apps, but when it comes to mainstream budget allocation, it's still a relative silo in terms of mainstream enterprise consumption. So what we're finding out is that if you could leverage your well-known cloud platform to not create another silo for Kubernetes, not create another silo for edge or whatever the new use cases are, but treat them as an extension of your core platform, at least from a manageability perspective and an operations perspective, then the chances of you adopting, or your enterprise adopting, these new technologies become higher. So, for example, in Calm, we have this pseudonym called Kalm with a K, right, which essentially allows Kubernetes containers to run natively inside a Calm blueprint but coexist with your databases inside of VMs, because that's how we see the next-generation enterprise apps morphing, right. Nobody's going to rewrite my whole app. They're going to maybe start with the web tier and the app tier as containers, but my database tier, my message queue tier, is going to be as VMs. So, how Calm helps you abstract the combination of containers and VMs into a common blueprint is what we believe is the first step towards what we call a hybrid app. And when you get to hybrid apps is when you can actually then get, eventually, over time, to cloud-native apps. >> You know, one of the questions I was hearing from customers is, they were looking for some clarity as to the hybrid environments. You know, the last couple of shows, there was a big presence of Google at the show, and while I didn't see Google here on the show floor, I know there was an update from, kind of, GCP and AHV. Is Google less strategic now, or is it just taking a while to, you know, incubate? How do you feel about that? 
>> So the way that you'll see us evolve as we navigate the cloud partnerships is to actually find the sweet spot of product-market fit, with respect to where the product is ready and where the market really wants that. And some of it is going to be us doing, you know, a partnership by intent first, and then as we execute, we try to land it with honest products. So, where we started off with Google, as you guys know, is to actually leverage the cloud platform side, co-locating with Google data centers, and then what we've evolved to is the fact that our data centers can quote-unquote integrate with their data centers to have a common management interface, a common security interface and all, but we can still run as co-located ones. Where the real integration that has taken some time for us to get to is the fact that, look, in addition to Calm, in addition to GKE kinds of things, rather than run as some kind of power-sucking alien on top of some Google hardware, true integration comes with us actually innovating on a stack that lands AHV natively inside GCP, and that's where nested virtualization comes in, and we had to take that crawl-walk-run approach there because we didn't want to expose to public customers what we didn't consume internally. So what we have with the new offering that is now called Test Drive is essentially that. We've proven that AHV can run in nested virtualization mode on GCP natively, you can co-locate with the rest of the GCP services, and we use it currently in our R&D environment, running thousands of nodes for pretty much everyday testing on a daily basis, right. And so now we expose that as an environment for our end customers to actually test-drive Nutanix as a fully compatible stack, on purpose, so you have Prism Central, the full CDP stack and so forth; then, as that gets hardened over a period of time, we expose that into production and so forth. >> So there's one category of cloud I haven't heard yet, and that's the service providers. So Nutanix used to be a really good partner for service providers, you know, enabling them to deliver services locally to a local geography, stuff like that, so what's the sense of Nutanix regarding these service providers currently? >> Yeah, I think that frankly, that's probably a 2019 material change to our roadmap. The analogy that I have is that when we first launched our operating system, we first had to do it with an opinionated stack using Supermicro. Most importantly, from an end-customer perspective, they got a single throat to choke, but also, equally importantly, it kept the engineering team honest, because we knew what it means to carry the pager for the full stack. Similarly, when we launched Xi, we needed to make sure we knew what SREs do, right, at that scale, and so that's why we started with our own version of it first, you know, as you guys know, with Digital Realty as well as partners like Xterra. But very soon what you're going to see is, once we have cleared that opinionated stack, software-wise we're able to leverage it; just like we went from Supermicro to Dell and Lenovo and seven other partners, you're going to see us create a Xi partner network, which essentially allows us to federate Xi as an OS out to the service providers. And that's more a 2019-plus timeframe. >> Yeah, speaking along those lines, the keynote this morning, Karbon with a K, talked about Kubernetes. 
Talk about that; that's the substrate for Nutanix's push toward cloud native, so-- >> Yeah, I mean, I think you're going to hear that in the day two keynote as well. Basically, customers want, as I said, an operating system for containers that is based on well-known APIs like kubectl from Kubernetes and all that, but at the same time, it is curated to support all of the enterprise services, such as volumes, storage, security policies from Flow, and, you know, the operational policies of containers shouldn't be any different from VMs. So think about it as: the developers still get a Kubernetes-like interface, they can still port their containers from Nutanix to any other environment, but from an IT ops side, it looks like Kubernetes, containers, and VMs are co-residing as a first-class option. >> Yeah, I feel like there had been a misperception about what Kubernetes is and how it fits, you know. My take has been, it's part of the platform, so there's not going to be a battle for a distribution of Kubernetes, because I'm going to choose a platform and it should have Kubernetes, and it should be compatible with other Kubernetes out there. >> Yeah, I mean, it's going to be like a feature of Linux. See, in that sense, there are lots of Linux distros, but the core capabilities of Linux are the same, right. So in that sense, Kubernetes is going to become a feature of Linux, or of the cloud operating system, so that those least-common-denominator features are going to be there in every cloud OS. >> Alright, so Kubernetes, not differentiating, just expanding the platform. >> Enabling. >> An enabling piece. So, tell us, what is differentiating today? You know, what are the areas where Nutanix stands alone as different from some of the other platform providers of today? >> I think that, I mean, obviously, whatever we do, we are trying to do it thoughtfully, with operational simplicity as a first-class citizen. Like, how many new screens do we add when we add new features? A simple example of that is when we did micro-segmentation. The point was to make sure you could go from choosing ten VMs to grouping them and putting a policy on them as quickly as possible, with as little friction as possible in adopting a new product. So we didn't have to "virtualize" the network; you didn't need to have VXLANs to actually micro-segment, just like in public cloud, right. So I think we're taking the same thing into services up the stack. A good one to talk about is Era, which is essentially looking at databases as the next complex beast of operational complexity, especially Oracle RAC. It's easier to manage Postgres and so forth, but what if you could simplify not just the open source management but also the database side of it? So I would say that Era would be a good example of a strategic value proposition, or what it means to create a one-plus-one-equals-three value proposition for database administrators. Just like we did that for virtualization administrators, we're now going after DBAs. >> Alright, well, Sunil, thank you so much. Wish we had another hour to go through it, but I'll give you the final word: as people leave London this year, you know, what should they be taking away when they think about Nutanix? >> I think the platform continues to evolve, but the key takeaway is that it's a platform company, not a product company. And with that comes the burden, as well as the promise, of being an iconic company for the next, hopefully, decade or so. All right, thanks a lot. 
>> Well, it's been a pleasure to watch the continued progress, always a pleasure to chat. >> Thank you. >> All right, for Joep Piscaer, I'm Stu Miniman, back with more coverage here from Nutanix .NEXT 2018 in London, England. Thanks for watching theCUBE. (light electronic music)
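As a rough sketch of the copy-data workflow Sunil describes for Era, snapshotting a production database and then carving a thin clone from that snapshot for test and dev, the example below uses hypothetical REST endpoints, field names, and credentials. None of it is Era's actual API; it only illustrates the shape of the snapshot-then-clone idea.

```python
# Sketch of an Era-style snapshot-then-clone flow.
# Every URL path, field name, and credential here is a hypothetical illustration,
# not the real Nutanix Era API surface.
import requests

BASE = "https://era.example.internal/api"       # hypothetical service endpoint
AUTH = ("era-admin", "not-a-real-password")     # placeholder credentials

def snapshot_database(db_id, name):
    """Take a point-in-time snapshot of a registered source database."""
    resp = requests.post(f"{BASE}/databases/{db_id}/snapshots",
                         json={"name": name}, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()["snapshot_id"]

def clone_from_snapshot(snapshot_id, clone_name):
    """Create a thin-provisioned clone from the snapshot for test/dev use."""
    resp = requests.post(f"{BASE}/clones",
                         json={"snapshot_id": snapshot_id, "name": clone_name},
                         auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()["clone_id"]

snap = snapshot_database("prod-erp-db", "nightly-2018-11-28")
clone = clone_from_snapshot(snap, "qa-erp-copy")
print("thin clone ready:", clone)
```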
SUMMARY :
Sunil Potti, chief product and development officer at Nutanix, explains how the company went from a handful of products to more than ten within a year, and why the portfolio is now packaged as core, essentials, and enterprise to match the customer journey from HCI to a genuine private cloud to multi-cloud services. He covers Calm, Frame, Epoch, and Beam as services that can be used independently of the Nutanix stack, hybrid applications that mix containers and VMs in a single Calm blueprint, nested AHV on GCP and the Test Drive offering, plans for a Xi partner network for service providers in 2019, Karbon for Kubernetes, Flow micro-segmentation, and Era for simplifying database copy management.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lenovo | ORGANIZATION | 0.99+ |
London | LOCATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Piskar | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Sunil Potti | PERSON | 0.99+ |
2019 | DATE | 0.99+ |
ORGANIZATION | 0.99+ | |
Supermicro | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
London, England | LOCATION | 0.99+ |
Epoch | ORGANIZATION | 0.99+ |
Xterra | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Sunil | PERSON | 0.99+ |
Beam | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
3,500 people | QUANTITY | 0.99+ |
twelve months ago | DATE | 0.98+ |
both | QUANTITY | 0.98+ |
this year | DATE | 0.98+ |
ten VMs | QUANTITY | 0.98+ |
Linux | TITLE | 0.98+ |
EC2 | TITLE | 0.98+ |
first step | QUANTITY | 0.98+ |
three times | QUANTITY | 0.98+ |
Kube Cattle | TITLE | 0.98+ |
S3 | TITLE | 0.98+ |
DBS | ORGANIZATION | 0.98+ |
one category | QUANTITY | 0.97+ |
Kubernetes | TITLE | 0.97+ |
thousands of nodes | QUANTITY | 0.96+ |
seven other partners | QUANTITY | 0.96+ |
Stu | PERSON | 0.95+ |
2018 | EVENT | 0.95+ |
today | DATE | 0.95+ |
Edge | TITLE | 0.94+ |
SMC | ORGANIZATION | 0.94+ |
first | QUANTITY | 0.94+ |
a year | QUANTITY | 0.92+ |
single throat | QUANTITY | 0.91+ |
Oracle | ORGANIZATION | 0.91+ |
The Cube | ORGANIZATION | 0.91+ |
SREs | ORGANIZATION | 0.89+ |
this morning | DATE | 0.89+ |
Sunil | ORGANIZATION | 0.89+ |
Kubernetti | ORGANIZATION | 0.86+ |
couple | QUANTITY | 0.85+ |
EU | EVENT | 0.84+ |
Frame | ORGANIZATION | 0.83+ |
Karbon | PERSON | 0.82+ |
Binny Gill, Nutanix & Rajiv Mirani, Nutanix | Nutanix .NEXT 2018
>> Announcer: Live, from New Orleans, Louisiana, it's theCUBE, covering .NEXT Conference 2018, brought to you by Nutanix. >> Welcome back to theCUBE here in New Orleans, Louisiana. I'm Stu Miniman, with my cohost Keith Townsend, who is the CTO advisor, and this is the CTO segment. Happy to welcome back to the program, we have Binny Gill and Rajiv Mirani. Both of them are CTOs. Binny, you've got cloud services, and Rajiv, you have cloud platforms. Let's start there, when we talk about, you know, there was a survey when you registered for the event and said, what do you think of Nutanix as? Am I your server vendor, am I your HCI vendor, am I your cloud vendor, am I your mega, uber platform of everything? You've got platforms and services, help us understand a little bit how this fits and how you look at the portfolio, and we'll arm wrestle if you guys can't agree. >> Rajiv: That sounds good. >> Binny: Yeah, go ahead. >> You want to go ahead? So both of us obviously work very closely together, but broadly speaking, I look after the core stack, the storage, networking, hypervisor, including Prism, and then Binny looks more at the services we're building on top, Era, Calm, things like that, so Binny, can you explain that a bit? >> Given the breadth of the ambition that we have, right, I mean, it's good to focus on the two layers separately in some sense, build a platform that is capable of hosting a whole bunch of services. As you can see in what Amazon and others have evolved, they've spent a lot of time building platform, and if you think about it, even Nutanix, for the last seven, eight years, has done a really good job. And once you have a solid foundation, and building cloud requires some new capabilities as well, as Rajiv has said, networking and security on top, now you can start building services, and services themselves have a stack, right? Because there will be higher-level services that use some lower-level services and this. So that's, you know, that's a long journey ahead of us. >> Yeah, I mean, that's a great point, 'cause every time, it seems like we have, you know, oh, this next-generation thing, I'm not going to have to worry about the underlying thing. Virtualization's going to totally abstract it. We've spent a decade fixing the storage and networking challenges there. Containerization, once again, it's like the application done there. Serverless, of course, will take care of all this, but you know, everything underneath it, it still needs work. How do you balance and give us some of that, you know, what's the glue versus abstracting and going to developers? Maybe let's start with platform. >> Well, the platform's always going to be there, right, and as we look at things like containers, that's actually where things get messy. How do containers work with storage, is one of the bigger issues right now with Kubernetes and other frameworks. So we have to start with a platform, we build on top of that and hopefully abstract enough that, you know, the services themselves don't have to deal with the messiness of the platform. >> Yeah, if you look at how technology is evolving, the more things change, the more they remain the same. The platform used to be Linux, Windows, I mean, that's the operating system on which I build my applications, right? Now, the new platform is cloud. AWS is a platform, is an OS, and Azure is one OS, and how do you build applications that can run on these new, next-generation platforms? But the kind of problems to solve are still the same. 
I want to snapshot my application, back it up, I want to move my application one place to the other, I want to scale it out, scale it in. So the problems are identical to what we had, but it's just that solving it with the new tools that we have, Kubernetes, containers, and so on. >> Yeah, and sometimes birds just fly right through our studio. >> Yeah, I mean, we worry about bugs, and now we have birds flying in. >> So, Rajiv, talk to us about, you basically have two different types of cloud clusters. You have to serve Binny's organization, you also have to serve your external clients. Storage, network, compute, has to have APIs, has to have capabilities, basic capabilities that both your customers who want to build their own overlay, and then Nutanix services on top. Talk to me about, how do you make sure that you're building the best cloud platform to be consumed by cloud services, whether they're Nutanix cloud services or someone else's. >> I think, just comes out of the core principles that we have built the company around, right, that we will always build things around web-scale design, so it has to scale to very large deployments, it has to be completely distributed, it has to go through a certain amount of vetting, in terms of having APIs exposed. Nothing we do internally is through secret APIs, everything is public APIs, so you're pretty stringent on some of these things. And then of course, layering on the simplicity of Nutanix is another thing that we take very, very seriously, so when we do all that, nice patterns emerge. I think it lends itself to an elegance that the platform provides for the rest of the stack. >> So, then we get to a confusing abstraction, which is, you mentioned it earlier, containers. Who gets containers? Is that your organization, is that your organization? Is it a fundamental part of the foundation, or is it a cloud service? >> I think the trick is to not necessarily worry too much about the boundary here, because frankly, this is something that the industry is still figuring out, you know, what layer is this new Kubernetes thing at? And is it just at containers, but actually, now it's going into all the way, application provisioning, load balancing, distributed routing, all sorts of things, so that's, I mean, we work as a team essentially, and there's a whole bunch of engineers that are looking at the whole picture, it's always very important to look at the entire picture and then figure out what are the right layers to go solve the problem, and when you're looking at containers, the bigger problem that our customers are talking about is, how do you deal with the legacy plus the containers in one environment? Now, I have my application, it's a three-tier application. The database, I still want to run in a VM, right? But I want to start tasting this Kubernetes thing, so I want to go with my app, the web tier with containers, but it needs to be in one view, and that's what Calm demonstrated. Through Calm, you can orchestrate an application that's part VM, part containers with Kubernetes and help our customers transition. So which layer these things are, it's going to be an evolving answer. >> So Binny, I love that you started the conversation around Calm. Is Calm the first interaction that most customers will experience when it comes to Nutanix cloud services, or is there a different, one of the other services, the more likely first experience of cloud services versus the trivial compute, storage, network. 
>> Right, so the first cloud service that we have announced, that we'll deliver, is DR, right? I mean, that's the first one with Xi. Once DR is available, very quickly we'll add more services. Beam is another one that has to fold into the Xi cloud services. When I say fold in, it essentially means you have the same identity, and you have the same billing mechanisms, and the same experience. You know, similar to when you go to a public cloud, you'll see there's a host of services, and they're sort of equals, and you can pick whichever one you want to use. What we want to provide with Xi cloud services is that same experience, except that these services are now hybrid. You can have them on-prem, you can have them in the cloud. And our teams are building this hybrid view, some of which, the preview of it, you already saw in the demo there: you saw availability zones on both sides, shown on one screen; now you'll see the service footprint on both sides, on one screen. >> Stu: Yeah, Rajiv-- >> From an experience point of view, I think Calm will be how people see this for the first time; that's going to be the central marketplace that we will have, that's where people will launch services from. >> Right, so where's the portal for cloud services? And as I understand it, Calm is that portal. >> Calm is a lot more than that; it'll have not just services but applications and workloads as well. But yes, the experience will start with Calm. >> When you talk about a hybrid cloud world in the platform, people are trying to understand what exactly lives where. When we hear about Xi, wonder if you might be able to give us kind of a compare and contrast: say you look at VMware, and VMware on Amazon is kind of an easy one to understand, as it's relatively the same stack, just living in a different data center. >> So we're doing things a little bit differently. While we are building our own cloud data centers today, we're architecting it in a way that we're not tying it down to any single stack, that it has to be only a Nutanix-oriented stack. We absolutely intend to scale this out by partnering with service providers, with cloud vendors, and so on. You saw something in the keynote yesterday about running nested on GCP. You can imagine where that will go in the future, and other clouds are also on the radar. Much like we did with our HCI stack, which we shipped on Supermicro, we're conscious of the fact that it's software that we can move anywhere. We are building Xi exactly the same way. >> Yeah, and what I'd add is, while we are doing it in our own data centers right now, we are learning a lot, and as we learn the things that are truly needed to make running a cloud easy from an operational perspective, that allows us to build a product that is an honest product to give to our partners and service providers and say, now you go run it, and you won't be spending too much. For example, the experience that they've had with OpenStack, it cannot be repeated again, right? So that's what we want to do. >> So let's talk about the relationship with Google as a model going forward. Is that prototypical of what you're looking to do with other public cloud providers? And first, give us some color around that announcement; we haven't had anyone on theCUBE talk about Xi and Google. And then kind of the strategy moving forward. 
>> A lot of the public cloud vendors are actually realizing that hybrid cloud is important, and as part of that, they're providing bare-metal services, and Google has its nested service, to enable others to bring their own stack, you know, virtualization stack, to run there. Amazon has done it with VMware; Amazon has also announced their intention to offer bare-metal services. So we see a future where a lot of these public cloud vendors will offer bare metal, and that's where our Xi stack will run, also giving customers choice to go from one cloud to the other seamlessly. Today, we know that Nutanix can move from the public Xi cloud to on-prem and back, but once you have the Xi cloud running on multiple cloud vendors, you can move between cloud vendors seamlessly as well. And that's a really compelling message for our customers. >> Great. One of the challenges for some of us watching is, you've got a pretty big portfolio now, and some of the things are out in the future; it's like, okay, where does Nutanix fit, how do they have the right to participate in this? Wonder if you can talk a little bit about Era, and maybe Sherlock is a little bit further out. >> Era is about managing copies of your databases. Again, if you look at where a lot of cost is sunk in enterprises, it's running my database, a production database; for every single production database, there'll be maybe tens of test copies of it. What Era does is minimize the cost of managing the copies, and also, they're thinly-provisioned copies. That's something that our customers have said is a real pain point for them that nobody solves really well. So we decided to work on that. That's just a starting point of what we can do in this PaaS layer, and it also helps us learn this space as well. We are reaching out to not the infrastructure admin, but actually to the database admin. It gives us a new audience to talk to as well. So from an audience perspective, we are broadening the scope, we are reaching closer to their lines of businesses and the decision-makers, which is good. Now, going to Sherlock-- >> Actually, if I could just, one quick followup on the database piece. Database migration's really hard. You know, talk to any customer and you say database migration, it's one of the things that strikes fear in them. Talk just for a second, if you could, about the expertise that your team has and why you believe you can really deliver that push-button simplicity that Nutanix is known for. >> Oh, so yeah, the team that's building Era are hardcore Oracle folks who have decades of experience doing those kinds of hard problems, and they've come here with a mission, into Nutanix, that we are going to solve it. Using the Nutanix platform that we have built, there are so many things that can be done in a better way, and since we have a clean slate, we can start afresh and do it the right way. About our capability to do it the right way, making it simple for our customers, we don't have a doubt. In fact, a lot of customers who have tested this in alpha have raving reviews on that, and they just want it as soon as possible. >> And on the database migration subject, we also have a group called SQL Xtract that we've been shipping for some time that helps you migrate your databases from existing three-tier or even hyperconverged stacks onto Nutanix. So we have some expertise in the area already. >> So, a little bit on that: I heard the term copy data management. 
Is this mainly copy data management, or is this actually database migration, the ability to move from one database to another one, or is it all of the above? >> So, it's doing management of copies, and it's also allowing you to clone databases. So you can go to a snapshot and clone another one. Migration is not yet there, but it's a natural consequence of the capabilities that we have, because once you have snapshots, we have the capability of moving snapshots from one data center to the other using our DR capabilities. So that's on the roadmap. Further down the roadmap is database provisioning itself. If you want to provision a brand-new database, you can also do that. So these are the natural progressions of the work, but what we wanted to do, just like what we did with Xi, is start with the hardest, thorniest problem, and then work backwards into the simple things. >> Alright, so unfortunately, we're running short on time. Give us a closing word. Rajiv and Binny, maybe you can talk for a quick second about project Sherlock and give us some things that we should look for down the road from Nutanix. >> Yeah, so we believe that the world needs an enterprise cloud operating system. What that means is it can run on the private cloud, in the public cloud, and on the edge, and that's where Sherlock comes in. I mean, it's taking our stack and creating a mini-PaaS version, as you saw in the demo, and running it at the edge in a way that all of your footprint appears like one dispersed cloud. And that's a pretty exciting space, and we think that is the key differentiator that we'll have going forward. >> Any final words, Rajiv? >> I think he covered quite a fair amount of ground, so yeah, thanks for having us on. >> Alright, well, it goes back to really that distributed architecture, the core. Appreciate having the conversation, the CTO roundtable, as it were. Binny, Rajiv, always a pleasure to catch up. For Keith Townsend, I'm Stu Miniman, back with more here. Thanks for watching theCUBE. (techno music)
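To make the "part VM, part containers" hybrid-app pattern from this conversation a bit more tangible, the sketch below stands up a database tier as a VM through a Prism-Central-style v3 REST call and a web tier as a Kubernetes Deployment. The VM payload is heavily trimmed and partly assumed, and the host names, credentials, and image are placeholders; this illustrates the pattern, not an actual Calm blueprint.

```python
# Sketch of a hybrid three-tier rollout: database as a VM, web tier as containers.
# The v3 VM payload is abbreviated and partly assumed; check the real Prism Central
# API reference before relying on anything like this.
import requests
from kubernetes import client, config

PRISM = "https://prism-central.example.internal:9440/api/nutanix/v3"  # placeholder host
AUTH = ("admin", "not-a-real-password")                               # placeholder creds

# 1. Database tier: create a VM (fields simplified for illustration).
vm_spec = {
    "metadata": {"kind": "vm"},
    "spec": {
        "name": "erp-db-01",
        "resources": {"num_sockets": 4, "memory_size_mib": 32768},
    },
}
r = requests.post(f"{PRISM}/vms", json=vm_spec, auth=AUTH, verify=False)
r.raise_for_status()

# 2. Web tier: a small Deployment on the Kubernetes cluster (image is a placeholder).
config.load_kube_config()
web = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="erp-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "erp-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "erp-web"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="web", image="example.com/erp/web:1.0")
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=web)
```

The value the interview points at is having one blueprint and one operations view over both halves, rather than stitching the two calls together by hand as above.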
SUMMARY :
brought to you by Nutanix. and how you look at the portfolio, Given the breadth of the ambition that we have, right, it's like the application done there. Well, the platform's always going to be there, right, So the problems are identical to what we had, Yeah, and sometimes birds just and now we have birds flying in. Talk to me about, how do you make sure that that we have built the company around, right, Is it a fundamental part of the foundation, that are looking at the whole picture, So Binny, I love that you started Right, so the first cloud service that we have announced, that's going to be the center marketplace that we will have, and as I understand, Calm is that portal. Calm is a lot more than that, it'll have not just services When we hear kind of Xi, wonder if you might be able to that it has to be only a Nutanix-oriented stack. and as we are learning the things that So let's talk about the relationship and you can move between cloud vendors seamlessly as well. and some of the things out in the future, and the decision-makers, which is good. and why you believe you can really deliver that Using the Nutanix platform that we have built, So we have some expertise in the area already. I heard the term copy data management. of the capabilities that we have, and give us some things that we should look for and running it at the edge in a way that I think he covered quite a fair amount of ground, distributed architecture, the core.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Keith Townsend | PERSON | 0.99+ |
Binny Gill | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Binny | PERSON | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Rajiv Mirani | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Rajiv | PERSON | 0.99+ |
New Orleans, Louisiana | LOCATION | 0.99+ |
yesterday | DATE | 0.99+ |
one screen | QUANTITY | 0.99+ |
both sides | QUANTITY | 0.99+ |
Windows | TITLE | 0.99+ |
Both | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
two layers | QUANTITY | 0.99+ |
one view | QUANTITY | 0.99+ |
first time | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
first experience | QUANTITY | 0.98+ |
Oracle | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
Linux | TITLE | 0.98+ |
AWS | ORGANIZATION | 0.98+ |
2018 | DATE | 0.98+ |
one database | QUANTITY | 0.98+ |
Sherlock | TITLE | 0.97+ |
One | QUANTITY | 0.97+ |
first | QUANTITY | 0.97+ |
first interaction | QUANTITY | 0.97+ |
VMware | ORGANIZATION | 0.97+ |
single stack | QUANTITY | 0.96+ |
three-tier | QUANTITY | 0.96+ |
uber | ORGANIZATION | 0.96+ |
first one | QUANTITY | 0.96+ |
Supermicro | ORGANIZATION | 0.96+ |
Today | DATE | 0.95+ |
Azure | TITLE | 0.95+ |
Xi. | ORGANIZATION | 0.94+ |
Xi cloud | TITLE | 0.92+ |
one environment | QUANTITY | 0.92+ |
Kubernetes | TITLE | 0.9+ |
a decade | QUANTITY | 0.89+ |
one cloud | QUANTITY | 0.89+ |
tens of test copies | QUANTITY | 0.89+ |
second | QUANTITY | 0.89+ |
one OS | QUANTITY | 0.87+ |
Stu | PERSON | 0.87+ |
Xi stack | TITLE | 0.86+ |
Era | ORGANIZATION | 0.86+ |
PaaS | TITLE | 0.86+ |
Prism | ORGANIZATION | 0.85+ |
SQL Xtract | ORGANIZATION | 0.85+ |
Dheeraj Pandey, Nutanix | Nutanix .NEXT 2018
(upbeat instrumental music) >> Presenter: Live from New Orleans, Louisiana, it's theCUBE, covering dot .NEXT Conference 2018 brought to you by Nutanix. >> Welcome back to theCUBE, SiliconANGLE Media's live production of Nutanix .NEXT Conference here in New Orleans, Louisiana. I'm Stu Miniman joined by my co-host Keith Townsend. Happy to welcome back to the program, also fresh off the keynote stage, the founder, CEO and Chairman of the publicly traded, Nutanix. Dheeraj Pandey, thanks for joining us. >> Thank you, thank you for your time by the way. >> Dheeraj, it's always a pleasure. One of the things we say about theCUBE is we want to take those conversations that we're having at events in the industry and share them out, and we've had the opportunity to have many of them over the years. So to start off with, when you take us back, some of the keynote you say five years ago couldn't really predict what was going to happen now. Yet I talked in our open here, the first interview that we did with you back in 2012, talking about the challenges over time in distributed architectures, it's more real today than it was back in 2012. Cloud has matured and is a little bit more nuanced today. The application space is exploding and changing more than ever. I guess inside a little bit, when you talk about the vision that you had for Nutanix, any major learnings on things that have surprised you along the way? And what things have played out exactly like you thought they would? >> Well, let's start with the easy one, which is the way things have played out, what we wanted them to play out like. I think the idea of commoditization of hardware, the fact that things will become pure software, all these hardware devices should look like apps was one of our sort of big prognosis early on, like six, seven, eight years ago. And largely, everybody is talking about software-defined everything. And that's not to say that hardware doesn't play a role, it's just that it becomes more invisible in the sense that with software running on top, and the fact that you have economies of scale coming with standardization in hardware, a lot of things will move to pure software. That's really worked out well. Disaggregation has worked out well in our favor. The fact that you'd stop buying big things and you start small and pay zero. I think consumerization has really, I mean and this is a word that is a cliche in many senses but what does it mean to have consumer-grade experience of enterprise grade systems, which is a paradox in itself to say that if consumer-grade experience with enterprise grade systems, but I think that has turned out really well for us. And in the staying power of everything eventually is can you build reliable systems? Can you build highly available systems? I mean, because building trust with the enterprise is really hard, and there's lots of startups that has come and gone, that have over-promised and under-delivered, and I think that's one of the things that has really worked in our favor to be really methodical and robust with the way we build our systems, especially the backend systems. And it's showing up in the front end of the world. Surprise, surprise, I think the fact that it's mega distributed now, not just distributed because distributed over LAN is one thing, but distributed over WAN is a very different thing altogether and you need to really think about the basic tenets of computer science, about state and migration and caching. 
And a lot of this is coherency, consistency, availability, network partitioning, there's a lot of things you need to think about in a very different way than you used to think about on a LAN itself, so... >> Yeah, I want to drill into one of those things. The move to a software model, you and I talked since the early days and Nutanix, at its core, it's software that you do. Changing how people think and consume and boy, getting the financial arm of companies, your channel partners, your salespeople and your customers, that's a challenging piece there. There was one of the customers I've already talked to this week that said one of the things we always had was I buy stuff and you tend to over buy and you could never kind of shrink down. Now, I go to a software model, I have a certain piece of it that I really understand and then I buy, and I can even kind of dial back as needed. Maybe explain some of the nuances and some of those changes. You know, how's the field doing with this? How's the channel adopting to this, and any customer stories? >> Yeah, I think there's, especially for ambitious companies, there's always a Netflix moment, there's always an Adobe Omniture moment. Look at these companies 10 years ago, Adobe was a $3 billion company in 2007. But they said we need to dramatically look at consumption model as the big differentiator going forward actually. Even though they had digitized blockbuster and Hollywood videos and so on, they said it's not enough. We need to digitize even further. I mean, Apple had digitized music with $0.99 songs, but the music player itself needed digitization, and I think that's what happened with iPhone bringing a music player on as an app itself. Photographers digitized, you know because you could now do JPEG files back and forth in emails, but the camera itself made further digitization and the camera became an app. So I think there's multiple layers of digitization that needs to happen. I think as a company, we've digitized a lot of hardware devices. But as a company, we had to digitize ourselves even further. This is our digital transformation. The fact that you can consume Nutanix in ways that are even more invisible, the fact that you can try out Nutanix, kick the tires on Nutanix, run it in your favorite server that you want. And then after you like it, you call us. You know, all of a sudden, the sales funnel is warmer because whenyou look at sales funnel, you don't need people up there to really go do a POC and kick the tires and technology and so on. So software provides access, which is probably at the core of an operating system. If you don't have access, if you don't feel distributable, then you'll always stay at the mercy of the appliance gravity. Because appliance's gravity, it's hardware, you need to ship things as physical objects being shipped, there is logistics, there is capital expenditure, there's a lot things involved that really keeps you sort of anchored to the bottom. And the only way to unleash this is to really bring more digital delivery models, and software is one such thing. Now our sales teams like starting this quarter, are being paid on software only as opposed to on the hardware itself. And we're doing things in the channel that makes it really unique because the customer experience doesn't have to change. In some sense, we're really saying can we have the cake and eat it too? 
And that's what we're really doing so that the true north, which is customer experience, doesn't take any kind of hit while we can actually look at going and selling the value of software itself. And as you know about Xi, I mean, just doing software alone is just the first step towards digital transformation. And the further digitization is, when nothing is visible on-prem for Nutanix, everything is totally invisible and you can swipe a credit card, you can sign up in a matter of seconds, I think that is where the real epitome of digitalization will be for the company. >> So let's talk about the impact of becoming a software company. I love some of the stories that he says, the ability to download software and kick the tires. I've seen some really geeky stuff, people running Prism on bare metal clouds, there's use cases that I didn't really consider. What are some of the more interesting things that your partners and customers are doing that you didn't expect? Like, what's the surprises? >> Well, it starts with the tinkerers. The most important thing about any good software company is tinkerers do things that you never imagined you could do. And it comes down to API, then it comes down to access. Like I have an app on my iPhone, it's called iBeer. Now Apple opened up its gyroscope, its accelerometer, its compass, and now you can basically fill up beer in your iPhone and you can drink it and it burps for you as well. I don't think the company knew that when it opened up an API. You know, what other possibilities, what kind of apps people will build? I think Community Edition has been at the core of access for us. People can just download it on an Intel NUC and do things with it. In fact, the NUC is part of a drone now so you can actually have an entire data center in a drone, and the drones can replicate to each other and failover from each other. In fact, we're talking to a lot of very large oil and gas and remote vertical organizations, which are really looking for what does it mean to miniaturize a datacenter? And then at the same time, do very serious stuff in it, back it up, encrypt it, compress it, replicate it, all sorts of things, even put event processing. Like, how do you put a Kafka bus on a mini PC-sized server, I think, palm-sized server? These are all the things that we hadn't imagined three, four or five years ago. But the fact that Nutanix can be shrink-wrapped into a palm-sized server, it takes this possibility to the edge, to the next level, actually. >> So the show floor is growing, you hit on API, critical part of building an ecosystem to becoming a true platform player. What are some of the more impressive parts of growth from (mumbles) ecosystem? >> I would bring it back to all the applications. We've done a tremendous job of applications on Nutanix. So if you look at north, south and east, west, I always look at things north, south, east, west, north, south is apps and hardware. So hardware platforms and apps on top of us. I think we've done a really good job with that. East, west, you know, look at data protection, business continuity, security. A lot of those companies are actually part of our overall ecosystem. And we still are not happy. I think we have to do an even better job. But what's the MuleSoft equivalent in infrastructure? Nobody thinks of integration in the operating system world today. It's mostly point-to-point. Okay, I am Nutanix, you're Arista, we'll do point-to-point. I am Nutanix, you are F5, we'll do point-to-point.
What if there's a real event bus where you could just publish topics and you become a radio station? There's TiVo and because you can go back in time, look at three days ago what events happened and so on. There's a whole aspect of putting a multicast tree of events that becomes a real groundswell of integration between different kinds of appliances, virtual appliances, physical appliances, hardware below us, software above us. I think that has yet to happen in the industry. And a lot of our developers are now talking about like what's the MuleSoft for Nutanix? So I think there's a lot of innovation that infrastructure has not seen because we always think differently than apps. What if we thought like app companies? We'd do things like app companies. And you'd see us in the next couple of years do something really interesting with, build a system bus which are the pub/sub like model as opposed to a RESTful request/response-like model actually. >> Dheeraj, gives us a little more color on some of those partnerships. I've seen Google and IBM on stage in the past. You're now over a billion dollars in revenue, public company, so I have to imagine some of these companies treat you a little differently. And the ones I kind of initially want to hear of, but you're welcome to run with it is, the server players and the cloud players is, how you see, how much can it just be we do our thing and how much do they need to work with you? >> Yeah, absolutely. Well a billion is still a small number. We're more like VMware of '07. And VMware of '07 was still a test and dev company by the way. They hadn't done anything production at all. People are still tinkering with databases and Microsoft apps and so on in '07. So we are small, we're still not a very big company. I think there's a lot of headroom for us in the coming years. The thing is that we've taken the tougher route by the way. Tougher route being we didn't have to sell ourselves to EMC, which is what VMware did. If you think about it, that asset was worth 60 billion eventually. Was sold for 600 million. It was a 100x smaller price to EMC. Because they actually seeded the ground on go to market. (mumbles) It's really hard, we need EMC to go and really do the distribution in peace. And as a company, we said no, no, I think there is value in building go to market on our own. I mean, look at our cap table. Our cap table is clean, we have dual class, voting structure and things like that. The things that VMware would die for, looking at from a financial investor point of view that we have that they don't, because we took the tougher route to really come to build a business. Now if you talk about hardware companies below us, and when I said below, I don't mean pejoratively, but you know, the stuff that runs underneath us, >> Stu: Southbound. (chuckles) >> I think NX has been a great way to build a market because if you hadn't done supermicro, we wouldn't be here actually. I mean, this architecture would have been a child's play, a science project, foreign tinkerer, most of them what it has become over time because the server vendors took note. They said, oh you can actually come and displace me? I would rather work with you because there's a lot of value we can bring to the table as well. 
So in that vein I think what we've done with Dell, what we've done with Lenovo, what we are doing with IBM, Fujitsu, and what we're doing with HP's and Cisco's channel partners, there's a lot of regional love that's forming on the ground with HP's and Cisco's channel partners and sales people because sales people are less political than headquarters. And think about strategy tax that headquarter face versus what sales people do. Sales people, I just saw a tweet, I think you talked about an HPE sales guy saying, you've got to bring Nutanix to the table because they really respect market forces. For them, market forces are most powerful actually. And above us and in the cloud, I think definitely a lot of work that we're doing in Google GCP. But I think you know, as bare-metal opens up from these other providers, we probably would be very interested to see exactly how Nutanix Xi works in bare metals of these public cloud providers. >> So you guys disrupt yourself. There was NX, business was doing fine. You guys are starting to build a reputation to being able to support the large enterprise with NX, some of the logistic challenges that you had as a small organization you were starting to overcome, but you decided you know what, you're going to untether yourself, let's zoom out of the industry. If you looked at the industry and say you know what, the advantage that Nutanix has because we're willing to disrupt ourselves, what are the tethers that remain in the industry that you're happy to go before your customers and say you know what, Nutanix doesn't have these tethers and if we did, we'll easily disrupt ourselves again. What's the competitive advantage? >> Hmmm, I think it's a great question. In fact it is the competitive advantage to say that the glass is half-full and it's not a zero-sum game. Because there's two kinds of people in the world. There's the zero-sum mindsets people who actually always think that if somebody is winning, the other must lose. And then there's growth mindset people who actually feel like of course legacy will get disrupted, but the new guys will actually make further progress, future progress. So as a builder, there's a bias in me and many of us out there in Nutanix that you need to have a growth mindset. And then the growth mindset, just giving a software to an OEM partner doesn't mean that it will shrink yours. It's possible that there's going to be more word-of-mouth, and the market forces will actually appreciate that. I mean, eventually, if somebody had a great relationship with Dell, Lenovo or HP or Cisco or IBM, we'd love to do business with them. And we have to relax some constraints because at the end of the day, this is still not our cloud. Now in Xi we can do whatever we want. But when we're walking to the customer, saying we want to build a cloud with you, with you is an important work. It's not for you, it's with you. And with you would mean that we'll have to bend a little bit backwards to relax the constraints away. And that's exactly what we've done. No one else has done this. Same is true for hypervisors. I mean, look at VMware. We go in there and we don't start talking about VMware right away. Like you know what, let's talk about architecture, let's talk about migration, let's talk about security, automation, and some day we'll certainly talk about whether you need to pay for a hypervisor. I think we'll do the same things with data protection and other things we're doing networking and so on. 
We're not going to just come in and say this is us and nothing else matters. API is everything. I mean, think of consumer companies. They've always competed with their partners and they've done a good job at it. They're like, look, at the end of the day, Spotify, their competitor, Apple music competes with it, but I'm not going to not give them a level playing field. Google Maps, Apple Maps compete. Keynote, Number, Pages competes with Microsoft Office. And I think the best companies are very good at being comfortable. Amazon the retailer, they fulfill more than I would say half their things not from their warehouse but someone else's warehouse, and both parties make money, actually. It's the growth mindset that creates large companies. >> Dheeraj, you're a technical founder, have great success with the company. You know, it's still one of the things I've loved in our journey on theCUBE, is being able to document companies that we knew from the early days and got over 2,500 employees now. >> Dheeraj: Actually, more than 3,500. >> 3,500. Congratulations. As you talk to people in the Valley or your travels around the world, what advice do you give to potential future entrepreneurs, people that are sitting like you did in the early days and have a vision for the future? >> Well, I've gotten a little more philosophical about organizational building. At the core of companies that are building and growing over time, is how do you keep reducing friction? And it's not just friction with customers and partners, also friction within. Because orgs grow and you need to, if you look at organisms, you know we have mitosis where cells divide themselves and become smaller cells and even smaller cells and so on. There's a division of labor, there's specialization, there's all sorts of things that actually happen as organisms themselves. I think an org is like an organism. And over time, there's a lot of accumulated stress that develops. And if you don't really go and address it, you're not a company, you're basically a business that doesn't understand culture. So what I talked about with a lot of entrepreneurs is really fuzzy words like how do you become authentic in what you do? Like, I was in Bloomberg and I talked about the difficulties with Xi. At the end of the day, most people, maybe not the 10, 15, 20% impressionables, but most people appreciate authenticity. And we're like, that is vulnerable, and being vulnerable is the best way to build a relationship actually. So I talk about vulnerability and trust and organizational design and reducing friction and things of that nature because once you are so many people, it's all about reducing friction. >> All right, well Dheeraj, one of the things people I know love about this show is you bring speakers that get us thinking authenticity. Hopefully one of the reasons why you bring theCUBE to the event. So thank you so much for joining us again. Always a pleasure. >> Pleasure. >> All right, Keith Townsend and I, Stu Miniman, will be back with lots more coverage here of the Nutanix .NEXT Conference 2018 in New Orleans. You're watching theCUBE. (technorock music)
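The "system bus" idea Dheeraj sketches earlier in this conversation — publish topics like a radio station, let any appliance subscribe, and keep a replayable history instead of wiring point-to-point request/response integrations — is essentially publish/subscribe over an event log. Below is a minimal, hypothetical sketch of that pattern; the names are invented for illustration and this is not a Nutanix interface.

```python
# Minimal, hypothetical sketch contrasting point-to-point request/response
# integration with a pub/sub "system bus" that also keeps a replayable log.

from collections import defaultdict

class SystemBus:
    def __init__(self):
        self.log = defaultdict(list)          # topic -> ordered event history
        self.subscribers = defaultdict(list)  # topic -> callbacks

    def subscribe(self, topic, callback):
        # Any number of listeners can attach without the publisher knowing them.
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        # The publisher broadcasts like a radio station: no point-to-point wiring.
        self.log[topic].append(event)
        for callback in self.subscribers[topic]:
            callback(event)

    def replay(self, topic, since=0):
        # The "go back in time" aspect: late joiners can read past events.
        return self.log[topic][since:]

# Usage: two unrelated tools react to the same event, and history is queryable.
bus = SystemBus()
bus.subscribe("node.health", lambda e: print("firewall reacts to", e))
bus.subscribe("node.health", lambda e: print("backup tool reacts to", e))
bus.publish("node.health", {"node": "A", "state": "degraded"})
print(bus.replay("node.health"))  # a new subscriber can catch up on what happened
```

The design point being illustrated: with request/response, each new integration is another bespoke pairing; with a shared bus, adding a consumer is one subscribe call and the publisher never changes.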
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
IBM | ORGANIZATION | 0.99+ |
Fujitsu | ORGANIZATION | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Lenovo | ORGANIZATION | 0.99+ |
Adobe | ORGANIZATION | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
2012 | DATE | 0.99+ |
2007 | DATE | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
HP | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Amazon | ORGANIZATION | 0.99+ |
Dheeraj Pandey | PERSON | 0.99+ |
60 billion | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
600 million | QUANTITY | 0.99+ |
Dheeraj | PERSON | 0.99+ |
$3 billion | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
two kinds | QUANTITY | 0.99+ |
10 | QUANTITY | 0.99+ |
100x | QUANTITY | 0.99+ |
New Orleans | LOCATION | 0.99+ |
15 | QUANTITY | 0.99+ |
Xi | LOCATION | 0.99+ |
New Orleans, Louisiana | LOCATION | 0.99+ |
10 years ago | DATE | 0.99+ |
both parties | QUANTITY | 0.99+ |
Spotify | ORGANIZATION | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
first interview | QUANTITY | 0.99+ |
three days ago | DATE | 0.99+ |
more than 3,500 | QUANTITY | 0.99+ |
one | QUANTITY | 0.98+ |
20% | QUANTITY | 0.98+ |
five years ago | DATE | 0.98+ |
MuleSoft | ORGANIZATION | 0.98+ |
six | DATE | 0.98+ |
this week | DATE | 0.97+ |
today | DATE | 0.96+ |
Kafka | TITLE | 0.96+ |
SiliconANGLE Media | ORGANIZATION | 0.96+ |
Hollywood | ORGANIZATION | 0.95+ |
Nutanix .NEXT Conference 2018 | EVENT | 0.95+ |
Arista | ORGANIZATION | 0.92+ |
over a billion dollars | QUANTITY | 0.92+ |
Keynote | TITLE | 0.92+ |
One | QUANTITY | 0.91+ |
Google Maps | TITLE | 0.91+ |
Brett Ruth, BKD | VMworld 2017
>> Announcer: Live from Las Vegas, it's the Cube. Covering VM World 2017. Brought to you by vmware, and its ecosystem partner. (electronic music) >> And we're back, this is SiliconANGLE Media's production of the Cube. I'm here with Keith Townsend, I'm Stu Miniman. Keith, I don't know about you, but one of the things that really excites me when I get to come to events like this is talking to the users, talking about the practitioners, what they're using, how they're using it. And so I'm really happy to welcome to the program, first-time guest Brett Ruth who's the server, storage, and virtualization supervisor at BKD. Brett, thanks so much for joining us. >> Thank you for having me. >> Alright so BKD. I know you're big in your field, but there might be some people out there that aren't familiar with your organization. Maybe just give us the thumbnail of the company, how long you been there, and your role there. >> Sure, BKD is the number 12 accounting firm in the United States, 36 offices, net revenue 564 million. Tax audit, corporate finance, wealth advisors, technology services, that's BKD in a nutshell. >> Alright, and your role in the organization? >> My role is kind of the server supervisor. I have a team of seven sysadmins who report to me. We take care of anything on the Windows server to Linux server, to our Nutanix environment, our Vmware environment, our IceLAN storage environment, and all the applications that live on those. >> Alright, so Brett, one of the things I'm sure you'll find, your stuff doesn't change, you don't have acquisitions to integrate, you don't have new technology being thrown at you all the time. I'm sure every year they just say how much more budget and how many more people do you want. >> Exactly >> So bring us in the reality. What's your world like? What are some of the big challenges? I'd say first if you can from just kind of the industry standpoint, and how does that impact what you're doing? >> Sure, so BKD is a growth firm so we look at business acquisitions when we can. We look at those, we actually completed one not that long ago in Chicago. We expanded there. So each one's always different, you know different technology. Some of those acquisitions are a couple servers, some of them are completely cloud-based, some of them are mixed in between. So having a platform where we settle on with Nutanix has kind of helped be able to make those integrations a little bit easier. But no, every year budget cycle comes around, and what's the initiative the firm wants to do. And every year it's different. It's fun, you know it's challenging to have different and new things we have to tackle every year. >> So when choosing these platforms, one, quick question around the organization, a bunch of knowledge workers. How many, what's the head count? >> Brett: Around 26,000. >> 26,000. So you guys and your IT organization, you work for the bean counters of bean counters. (laughing) So they understand ROI, TCO. When it comes to selecting these technologies, how much pressure are you under to do less for more, and prove that you're doing, I'm sorry, do more with less, and prove that you're doing more or less? >> Sure, and that comes up during the budget cycles. I mean there is a large amount of time that is spent on what's next year's initiatives? What does that server landscape look like? You know, is there a new product that comes out that requires a head count increase or not? Or is it a new application we need to stand up?
And every year that comes around, and the questions come. Well, maybe the firm didn't have a good year, maybe the firm had a better year. So we, you know, the budget gets adjusted based on that. But more times than not, the firm recognizes that putting money into IT does nothing but help the business grow. So as long as we spend it wisely, we usually can, we can get accomplished. >> Alright Brett, I want you to take us inside, you know I hate to do it, but the budgeting thing. Cause one of the promises of, you said you're using Nutanix, used to be okay this year, oh it's time for the server refresh, next year, wait, no, you don't have any server budget, you know, we're doing some storage ad ons, or things like that. You might get some budget here or there if you need it, or if there's an emergency, but you got to justify that. The promise of a pool of resources should be, well I'm consolidating a number of pools, and therefore, I should be able to be more agile, more flexible, I'm buying in smaller chunks, rather than bigger chunks. What's your experience been on kind of that purchasing from that relationship with the finance side of the business? >> Sure, so when I started BKD, I've been there about five years, it was a traditional three-tier architecture when we rolled into it. And the firm was growing at such a rate that we were running into those physical limitations of the hardware. And it's never a fun game to go ask the CIO an unbudgeted SAN purchase you know. Do that a couple of years in a row, and it gets harder and harder to ask those questions. So we finally came to a point as a company of we need to do something different. And, you know, through research and product I had, and my team all had to do to accomplish it, we landed on Nutanix, and we landed on a hybrid converge infrastructure. And what we can do is we build those quote unquote lego blocks, so now there's not a big, giant purchase of a SAN or a new set of UCS Chassis or whatever the product might be. It's a, I know this quarter I need this amount of nodes, or I know for this project I'm going to need this, and I can just build and add on when I need to. So it makes the budgeting and those unbudgeted purchases a lot more easier to take. >> So much of the messaging from day one, day two is aimed kind of at you. You're on the ground, you have to deal with not only the engineers that implement the technology, but also the executives that approve the purchases. So a lot of the messaging here has been for you. How have you received it, and what's your impression of Vmware's messaging around, take your favorite topic? >> Right, you know a lot of cloud talk's been happening here and a lot of DevOps has been talked about here, and a way to improve that. BKD has an internal IT development team, so a lot of those things I can take away here, and try and see if I can help our Dev team however I can. A lot of the messaging is just seeing where the industry is going, not just Vmware, but everyone on the solutions floor. I mean that's a lot of my time here is research and seeing what products that I know we have to complete in the next fiscal year or two, and then what products are out there that I can just buy. >> Alright, can you bring us into your application portfolio? What sits on the Nutanix platform, what doesn't? I hear you said you got a scale-out NAS platform also. You talked about some developers there. I'd love to understand how you figure out what goes where, where you are in building that out. 
How many nodes you have if you can share? >> The IceLAN is six nodes in each data center, the Nutanix is 26 nodes in each data center. We're probably 99.9% virtualized. The only thing I think we don't have virtualized is we still have a physical domain controller outside of both just from shear, if everything is off, I have one point I can get back into, right. But exchange, sharepoint, our sequel is all virtualized. The IceLAN is really kind of the unstructured file pool that we can put map drives, we can put blob storage from our sharepoint environment lands onto it. Flat files from our sequel land onto it. And, yeah, everything runs on our Nutanix. >> So going into that developer relationship, you know Nutanix, I've talked to these guys before about their ideal of being a cloud company. So developers, when they hear the term cloud, what's the impact of you, on your role, when you have Nutanix, a cloud company, and your developers asking for cloud? >> It's a interesting question because we try and phrase it as BKD, we now have an internal cloud, we have an enterprise cloud, you know the term private cloud. And we can provide those instant resources to DevOps when they need it depending on if they have a new set of QA boxes that need to be stood up. But you know there is some projects that we're looking at of is it AWS or is it Azure or is it Google's cloud. Are there things that make sense to go out there versus keeping 'em in house? And those come up as an as-need basis. >> So DevOps, so (laughing) When we talk about DevOps, what are the pain points that you guys, cause that's a big topic. Do I go all the way as far as Netflix and DevOps all the things that we say, or what have you guys targeted to say, okay, here's where the value add is in the enterprise? >> I think we're still, that's still of of those things that our development team's looking at. I think it really depends on the application and what the business is looking for. I mean there's been some products internally that the team's released that makes sense to stay on Prim. The next project I find out a month from now might be something that's perfect for the cloud. I think they just take that on a kind of case-by-case basis. >> Alright, Brett, you've got a portfolio of partners that you're working with here. What's on your list of to-do's for them? What are you looking for from the ecosystem to make your life easier and help? >> Always looking for more stable code releases. I think any engineer would love stable code releases. You know for the most part everybody gets that. We're always going to have issues. >> Anybody you want to call out for not giving you stable code releases? (laughing) >> I can say everybody because, I mean everyone will do that. No, I think it's continuing to improve the product, continuing to make it. It's that do more with less right? I can't have two or three dedicated people working on the virtualization environment. They have to be multi-skilled you know. My team that I have, my seven assist admins are all great, probably some of the best guys I've worked with. We all have to wear multiple hats, even sometimes maybe we don't want to. So having those products come into the environment that make it easier for them, and then just seeing how those code releases come out, that would just make our lives even better. >> Just real quick, can you say whose hardware your Nutanix is on? >> It's Supermicro, it's from Nutanix. >> It's the basic things. 
This morning the keynote got a big laugh talking about some of the coope-tition that goes on just between Dell, EMC, Vmware in some of their partnerships. Some of your partners get along better than others. Is that something that impacts you, something you think about at all? >> It's definitely, being a Nutanix guy coming into VM World this year has definitely been an interesting experience. It's that cohabitation that happens between the two. But at the end of the day, I still have servers to run, I have an environment to maintain for BKD, and you know, if I need something done, I know I can go to them, and they'll help work with me on it. >> So the show floor this year, Vmware just as massive as it's been all-- >> Yeah. >> Vmware is all about the ecosystem. How important is this large ecosystem to your everyday operations of your environment? >> I mean it's the never knowing what the next project that comes out, or the next thing the business wants to do, or the next acquisition comes up. Maybe there's a product that I don't have in house that needs to take care of it. And then having this many vendors that I can go and talk with over these couple days has been great because I can now go back to the team and go, man I didn't think about this, and this product would help solve that. Or two months from now something comes around, I go oh yeah I talked to these guys, and go flip through the business cards and the paper stuff we take home and call 'em up. >> I love that even as a Nutanix customer, the Vmware, the coope-tition, that you still find value in the overall-- >> Brett: Oh yeah absolutely. >> Brett, any either announcements or kind of new things coming out in the market, anything catch your eye? You said you were bringing that back to the office. >> Forgive me but I can't remember the name. The malware kind of virus scanner that Pat was talking about yesterday. That kind of really was a, being able to use that AI to figure out at a base level what malicious code is and isn't was, it would be an awesome game changer if it works out how it looks to be. >> Absolutely, no shortage of new things to look into. Brett Ruth, BKD, really appreciate you sharing your viewpoint on everything going on inside. Really appreciate you coming on. >> Thank you guys. >> Hope to catch up with you sometime in the future. For Keith Townsend, and I'm Stu Miniman, we'll be back with lots more coverage here from VM World 2017. You're watching theCUBE. (electronic music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Keith Townsend | PERSON | 0.99+ |
Brett | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Keith | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
BKD | ORGANIZATION | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Brett Ruth | PERSON | 0.99+ |
Chicago | LOCATION | 0.99+ |
36 offices | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
99.9% | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
Vmware | ORGANIZATION | 0.99+ |
United States | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
26 nodes | QUANTITY | 0.99+ |
each data center | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
DevOps | TITLE | 0.99+ |
this year | DATE | 0.99+ |
564 million | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
Pat | PERSON | 0.99+ |
SiliconANGLE Media | ORGANIZATION | 0.98+ |
both | QUANTITY | 0.98+ |
Las Vegas | LOCATION | 0.98+ |
VM World 2017 | EVENT | 0.98+ |
three-tier | QUANTITY | 0.98+ |
one point | QUANTITY | 0.98+ |
six nodes | QUANTITY | 0.98+ |
UCS | ORGANIZATION | 0.97+ |
next fiscal year | DATE | 0.96+ |
seven assist admins | QUANTITY | 0.96+ |
about five years | QUANTITY | 0.95+ |
Azure | TITLE | 0.95+ |
IceLAN | TITLE | 0.94+ |
day two | QUANTITY | 0.94+ |
Netflix | ORGANIZATION | 0.93+ |
This morning | DATE | 0.93+ |
VMworld 2017 | EVENT | 0.93+ |
26,000 | QUANTITY | 0.93+ |
one | QUANTITY | 0.92+ |
Around 26,000 | QUANTITY | 0.9+ |
day one | QUANTITY | 0.88+ |
12 accounting firm | QUANTITY | 0.86+ |
Windows | TITLE | 0.86+ |
first-time | QUANTITY | 0.81+ |
two | DATE | 0.78+ |
three dedicated | QUANTITY | 0.77+ |
couple of years | QUANTITY | 0.75+ |
Cube | COMMERCIAL_ITEM | 0.74+ |
vmware | ORGANIZATION | 0.74+ |
Sunil Potti, Nutanix - Nutanix .NEXTconf 2017 - #NEXTconf - #theCUBE
>> Announcer: Live from Washington, D.C., it's theCUBE, covering .Next conference. Brought to you by Nutanix. >> Welcome back to Nutanix dot Conf, everybody, sorry, .NEXT Conf, hashtag NEXTConf. This is theCUBE, the leader in live tech coverage. My name is Dave Vellante, and I'm with Stu Miniman. We go out to the events, we extract the signal from the noise. We have a real treat for you, the day two keynote was done by Sunil Potti, who's the head of product, Chief Product Officer in development at Nutanix, long-time CUBE guest. Sunil, good to see you again. Thank you for coming on. >> Yeah, thanks Dave. Good to be here. >> Agree with Stu, great energy in the keynote this morning. Have to say, we got in there, Stu, at around ten of, the place was not packed, but by the time you started, the place was packed. We heard the Lenovo party went till like two a.m. >> Stu: The extent of it. There was some excitement. >> People came in and, yeah, lot of cheering, so you must feel pretty good about that. >> Yeah, now I think, I was telling some of you guys, I think right now we are in our growth years where it sort of feels like we've got a lot of fans because folks want to sort of relate to technology companies that foundationally are disruptive and keep disrupting. It helps folks' careers, it helps their personal lives, everything, right? So, the vibe that we get out of these .NEXT conferences is mostly about this, well, not so young people, they feel young using the technology at work and while at the conference, so that's, I guess, part and parcel of some of the, sort of the excitement that you're seeing probably. >> Well, so let's get into it. I mean, a lot of announcements this week. Where do you want to start? >> I think, that's a big question. There's a lot of stuff. >> Maybe we start with the strategy, which you laid out two years ago, we're not just HCI, we're goin' cloud. So that's sort of the set-up, and you've had a number of proof points and product announcements this week that underscore that strategy. >> I think I'll break this down into four parts, just to structure it, which is, obviously we've moved way beyond HCI, we subsumed virtualization, and then, we said cloud, if you remember 2015, and then we improved on it in 2016 and so forth. The big thing was every few years it gives you an opportunity to truly offer a step function change in transformation with the company, and that's what 2017 is for us. That's why the announcements are all so packed, it's not just an incremental set of things. Let me break it down into four segments. One, the first thing is the fact that it truly is clear to us and to our customers that this needs to be this single software-centric fabric that gives you optionality of any hardware platform, any hypervisor, any consumption model, whether it be pay-as-you-go, it could be appliance-driven, it could be pure software ELAs and so forth. And then, obviously with our partnerships with Dell, Lenovo, support of Amazon and Azure, and now Google in a big way from the public cloud side, and I'll come to that, but then extending that to our software support on Cisco and HP, and now with IBM, Power especially, changing the game in terms of enterprise workloads. The first sort of segment of capabilities is to really make it look like our platform story is becoming an OS that cuts across, if I can call it, all of our applications, deployment form factors while being open. So that's the first category. Let me just maybe quickly summarize, and then we'll come back. >> Perfect.
>> The second is the fact that, look, while we were doing that, we were still taking an infrastructure-centric view, whether it's VMs, containers, whatever it is. So now, what we've found is that fundamentally we need to change the operational construct, elevate it to be an application-centric world, and that's where Calm comes in. Independent of hybrid clouds, just from a private cloud side itself, even if I'm just elevating my current infrastructure to a private cloud, an app-first automation is a good thing, and then, what we've done in that second category is to merge, quote-unquote, on-prem infrastructure with off-prem using this multicloud thing. That's the second thing, that's Calm. The third thing is the fact that, well guess what, while we can bring provisioning and operational convergence with Calm, true lift and shift is still very hard because the stacks on both sides are different within public cloud and private cloud, and that's where Xi comes in, which is our new cloud services. Essentially, it replicates your on-prem stack and seamlessly extends your data center so you can do some things like one-click DR and so forth. And the last but not the least was as we were building Xi, and we'll get into it, Google and Nutanix have started getting really close from a technology integration and a delivery perspective where things like Xi could become ubiquitously delivered, but more importantly, I could take Google Cloud PaaS services and fuse them with Nutanix enterprise solutions. So, those are the four. >> Sunil, let me poke at something. We've heard this story before, broad, lots of choice, let's build an ecosystem, but AHV seems to be a strong component, so Xi, you got to use AHV, you got to use Nutanix replication. It's like you've got lots of choice unless you want to use all these cool things we're doing, in which case you want the full Nutanix stack. >> And I think that's a constant thing for us is, look, at the end of the day, to us lock-in should be by choice, not by need. It's up to us to offer a series of value-added services on our full stack, but customers choose to go to that full stack for value. It's just like on-prem, right? I mean, look, 60, 70% of our enterprise workloads are still ESX-based, Hyper-V is still there, under the covers, SuperMicro or NX is no longer 100%, it's coming down. Frankly from a customer perspective of deployment, that's where they start, maybe they'll stay there. We can add value to that environment, but if you use AHV, you get micro-segmentation for free, it's one click. It's up to the customer to choose to use AHV based on the choice at that point. And the same thing applies to Xi as well, saying, look, on day one it'll replicate on-premise to off-premise, but on off-premise, at least on the enterprise side, Stu, we will still support ESX and Multi, you know, essentially the open environment is still a source for us. The target, on day one at least, we can build an honest product without being completely integrated.
>> Absolutely, and in fact, I would say containerization sort of expanded to the broader cloud-native workloads aspects, which is to us the big thing that customers came to us, and we've seen that now resonate as they just don't want DR. Obviously, one-click DR is a big deal, if you can pull it off by replicating the stacks. But really what they want from the enterprise side is they want to take enterprise apps that they are elastic, consume them as a cloud service, but co-locate them in such a way, not just in the data center level, but at an application-operations level, application-integration level, so that they can co-reside as if they were on the same node with a set of Google past services. So essentially, think about it, I've done all this work to do one-click DR, I have mode warehouse management system running SQL server or something else into the Nutanix cloud, that's a little bit easy now with Xi, but now, I can now run a big query app, I can honestly use those services as if they were in the same VLAN, and that's the real power of isolating containers. >> I love that, and I guess the easy compare on that is later this year we expect VMware on AWS to run. I'm sure you would posit that VMware and Amazon, you know, pricing might be a little bit different than the Nutanix and Google offering and what services and how you have them, a little different. >> I think, I'm sure you get a lot of responses on that one, but I'll tell you my take is, actually, even before comparing the approach. First of all, just philosophically, I think the strategy it makes sense of trying to make hybrid invisible, in general, right? I mean, vCloud frankly was the first cut at that. The way we look at it was vCloud was the right use case, wrong implementation. And to me, I think that is still the most important thing, which is before we build this hybrid cloud, whether it is with AWS or Google or anybody else that we've talked to and over at Nutanix or VMware, you still have to build a proper private cloud, as in the cloud that powers your primary data centers needs to look like a true Google or AWS. And to us, unless someone re-engineers that stack that's going to not be the same as, oh, even if I took, if I can call it, a half-baked fiber cloud, and I extended it as a service, it's still half-baked hybrid cloud. That's going to be the big thing, I think, that's going to turn things up. >> And that led us at Wikibon to coin this term, "true private cloud." We get a lot of grief for that term some time, but we saw a lot of cloud-washing, and the concept is basically to substantially mimic what's in the public cloud on-prem, and then, create a control plane that spans multiple physical locations. >> Sunil: That's right. >> Now, one of the things I've been getting a little grief on in this show, 'cause we think Nutanix is an instantiation of what we call true private cloud. One of the folks in our community who has been hitting on me, and he actually wrote a piece, this guy, Yaron Haviv, he's a very sharp guy, said that guys like you and others have no chance against the public cloud because he said this, "HCI is stateful," I want to read it and get your feedback. "HCI is stateful, its VMs, its vdisks, "they're like IT pets with a lot of labor. "AWS is stateless with micro-services, "its object, its database is a service, AI is a service, "and it's built for devs." I know you understand this, but how do you respond? 
>> I think, look, you know, frankly, we had a choice as a company a couple of years ago when we were growing, the primary question for this company was, well, if the apps are moving outside, forget about IAS or AWS, it's us, and users are moving outside, again, forget about dev and object store and all that. It's about consumerization because of mobility, everybody's got a phone, they can access an app. Who worries about infrastructure? As an enterprise infrastructure company, that's a secular question to answer, right? For us-- >> It's profound. >> For us, there's a reason why we didn't pack up and sell the company and move on. There's a thesis for the company. The thesis for the company is that we fundamentally think that cloud is not a zero sum game. And we're seeing that not just with the largest assets, like all the big dot-coms that went with cloud, defined cloud, and they're coming back now. One of your popular, whether it's your, you can call it your consumer devices that actually started with their service completely, as they grow, they're actually moving half of their services back to the private cloud. I think but it applies to the mainstream enterprise, which I call the fact that because public cloud is not a zero sum game not because of security or compliance, because those are temporal things, in my opinion. The moment AWS puts a data center right beside my data center, security is a little bit, compliance or data regulation is a little bit avoided. The real reason is purely going to be a financial choice for the kind of workload. If it's a predictable workload, even if I could re-engineer it, if it's predictable, it's kind of like me coming in, staying here in D.C. for three days, I'll rent a hotel. If it's there for a year, I'll lease an apartment. If I'm there for five years, financially, accel makes certain math deal, right? Doesn't matter if my costs are cheap, it's going to just work that way. I can buy hardware, cap excise it, amortize and so forth. I guess the simple answer to it is eventually I think there'll be 100% of the market will move. In some markets, there'll be 70-30, in some markets, it'll be 30-70. We are in 1% of that market, so for Nutanix, the more someone tastes the wine of public cloud, the faster they'll actually make the transition of their private infrastructure to look like Amazon. And in that sense, we are like, look, that's why want to do Xi is because Xi actually takes DR infrastructure away from us. We're probably making more money selling Nutanix in the DR data center when we are accelerating the move because we think that that eventually accelerates every workload to come through the primary infrastructure in Nutanix. At the end of the day, it's going to be not about objects, which is vdisk, I mean, he's right also, in the sense that shame on us if we don't have a level of abstraction that is app-centric, then you don't have really care about whether it's an object storage or vdisk or anybody, so that whether it's a developer or an IT operator, they use the same operation levels. >> Sunil, we like that Nutanix is putting out a vision, my understanding, you know, the cloud service next year? >> Sunil: Yeah, early next year. >> What I'm a little worried about is, we're almost out of time with you, and you went through so many different pieces. There of course, there's the Calm. 
We're going to talk to Aditya in a little bit, but maybe give us some of the highlights as to the stuff that's shipping now or soon that your customers have been talking about. >> What's happening is basically you take those four segments, the core software fabric evolving across every platform and so forth. There's a bunch of stuff that has started shipping in the 5.1 release which came out a few months ago, and a lot of it was shipped in the 5.5 release that's coming later in the year. Calm is part of that release, as you guys know, it's part of the same, it's baked in. We had a choice, so basically Calm has been engineered a couple of times over six, seven years. It's not a vulnerable product, but we took the time from last year, we didn't release it, create a Frankenstein, a power-sucking alien on the side, like some other tools. We took the time, to be honest, to integrate it into the console. You saw it, I mean, it's apps, it's taken time. Functionally, it's there. But that'll be part of the same release, and then, the Xi project will be early 2018. The Google integrations will come in a staged way, even as early as later this year with Calm and Kubernetes and so forth, and then extending to Xi. So the timeframe that we're talking about is probably minus three month to plus nine months. >> So, the DR solution that you showed, though, that is native Nutanix tech, right? That's not partners in the ecosystem. >> It's Nutanix software delivered as a full stack, and this is another thing that we had to take a hard call. It is raging debate couple of years ago is like who builds a public cloud in these days? Because that'll be the obvious question. And the question was really, it's not about building a public cloud, are you building a cloud service, whether it's on-prem or off-prem, are you being honest in the product design? Do you do billing and metering in one click? We don't do that today. But as building a cloud service through Xi, we are building all those for our private cloud customers. And so, the goal was, kind of like six years ago, the easy answer was to take Nutanix, sell our software and say we are VMware for the new era, not worry about BIOS firmware, upgrades, and all that stuff. And so, until we did Nutanix on SuperMicro, people actually believed that the market existed, and then, Dell and Lenovo and others came to buy. Same thing applies with Xi in an accelerated way. The moment we have the conviction to go build a full stack, make it look like an exact replica, but build these cloud capabilities, that's the big thing that I think some of our competitors didn't do was they took the same stack that was in the cloud and quickly tried to make it a cloud service. That's the reason why we are starting with our own service and data centers, and then, scaling through Google, for example, as a way for not just get global reach, but also to merge cloud-native apps with these enterprises. >> But the other obvious question, and I'm sure you guys had that conversation about this internally is, some of the folks in our ecosystem are in that space, and is this competitive to what they're doing? You guys have always been a customer problem solving company, but what was the conversation like in that regard? >> It hasn't come up, to be honest, I think the cloud service aspect is still relatively new, it's siloed, I mean, we are pretty clear. This is not an end-all, be-all service. We don't want to overreach from our things. 
It's about, look, you first move to Nutanix on-premise, and it's your choice to say, it should be as simple as, "I provision VMs, and the more VMs I have, or containers on Nutanix, I should right-click and say protect." And just like iCloud for the iPhone, they're in the cloud, right? That's the goal, the true product goal. >> That Amazon-like experience that you're describing. >> And it's a service that's integrated into it. And so, when we have partners who have been hosting Nutanix, if you're talking about service providers, you've already had questions on, rather than just create yet another service provider partner program that automatically conflicts, eventually what we think is that we will offer turnkey services, not a general purpose infrastructure. There's always use cases where somebody wants to outsource their primary data center, and that's where a lot of our partners will be, especially the mid-tier partners who are in the mode of outsourcing the full data center, and for them, offering one-click DR on their primary outsourcing business is another advantage. >> He's too good, we're not going to let him go yet. >> Sunil: No, no, keep going. >> Are we done? >> Central, some of the other announcements outside of the cloud, can we just touch on some of the highlights there? >> They're all part of the core fabric, and unfortunately they get subsumed into the, you know, when you have a big announcement, they kind of get subsumed. But a couple of interesting things came up, and I'm particularly fond of a couple of features. One is Acropolis File Services. I made this joke about killing NetApp a couple of years ago, and people keep reminding me about it. I think it's one of our fastest growing features in the last six to nine months, and it's a natural sell for us. People go in there, it's been a little bit limited because of SMB-only functionality, but now with NFS, it opens that up. Another one, though, that's probably more secular is this concept of machine learning now coming into the mainstream. People talk about AI and all that, and there is a lot of churn, but it all comes down to manifesting itself as tangible things that the customer sees. For example, if I can show that a certain amount of downtime genuinely reduced, that four nines became five nines, without the operator having to do anything, then it is becoming intelligent. I think you'll see more and more of that bar for machine learning, by the way, as a secular thing, and I'll predict that not just from Nutanix. >> Dave: Sure. >> AI, if I have to use the word, is going to become more prominent in the next few years, just like cloud was. >> So, Sunil, last question I have for you. From a development standpoint, where won't Nutanix go? >> Where will we not go? Great question. So for example, there's an obvious thing like AWS. It says, let me instrument every service that sits on my platform, and if I like that service, I might end up doin' it. The classic, if I can call it, the tyranny of being a platform partner. See, for us, I think because of two things. One, our ceiling of our current market is super high right now, Stu, right? I mean, we still, yeah, it's a billion bucks, growing 50, 60%, whatever, but we're still, even if half the business goes to AWS, Google, and Amazon, we can still be a company larger than we were. >> It's nearly a trillion dollar market. >> I mean, it's a big market, so therefore, we don't have to be greedy about near-term. We can be long-term greedy.
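As a quick aside on the four-nines versus five-nines remark above: the gap is easy to quantify. A minimal Python sketch, illustrative arithmetic only; the uptime figures are the standard availability definitions, not Nutanix-reported numbers.

```python
# Downtime budget implied by an availability level, per year.
# Illustrative arithmetic only; "four nines" / "five nines" are the
# generic availability definitions, not figures from this interview.

MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_minutes(availability: float) -> float:
    """Minutes of allowed downtime per year at a given availability."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for label, availability in [("four nines (99.99%)", 0.9999),
                            ("five nines (99.999%)", 0.99999)]:
    print(f"{label}: {downtime_minutes(availability):.1f} minutes/year")

# four nines -> ~52.6 minutes of downtime a year
# five nines -> ~5.3 minutes a year, roughly a 10x reduction
```

Cutting yearly downtime from roughly 53 minutes to roughly 5, without an operator doing anything, is the kind of tangible outcome Sunil is pointing at.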
>> Dave: Alright, we got to go. >> All right. >> Thanks so much for coming on, we really appreciate it, Sunil. Alright, keep it right there, everybody. We'll be right back with our next guest, right after this short break. (electronic keyboard music)
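Sunil's hotel / apartment / buy analogy earlier in this segment is really a break-even calculation. Here is a rough sketch of that math in Python, with invented placeholder prices; every number is an assumption for illustration, not AWS or Nutanix pricing.

```python
# Rent-vs-own comparison behind the "hotel / apartment / buy" analogy above.
# All figures are made-up placeholders -- the only point is that steady,
# predictable workloads amortize well on owned infrastructure, while spiky
# or short-lived ones favor paying by the hour.

CLOUD_PER_HOUR = 0.50          # assumed on-demand price for a comparable VM
OWNED_CAPEX = 4000.0           # assumed server cost, amortized over 3 years
OWNED_OPEX_PER_MONTH = 100.0   # assumed power, space, admin share
AMORTIZATION_MONTHS = 36
HOURS_PER_MONTH = 730

def monthly_cloud_cost(utilization: float) -> float:
    """Cost if you only pay for the hours you actually run."""
    return CLOUD_PER_HOUR * HOURS_PER_MONTH * utilization

def monthly_owned_cost() -> float:
    """Owned cost is flat regardless of utilization."""
    return OWNED_CAPEX / AMORTIZATION_MONTHS + OWNED_OPEX_PER_MONTH

for utilization in (0.10, 0.40, 0.80):
    cloud, owned = monthly_cloud_cost(utilization), monthly_owned_cost()
    cheaper = "cloud" if cloud < owned else "owned"
    print(f"{utilization:>4.0%} utilized: cloud ${cloud:,.0f}/mo "
          f"vs owned ${owned:,.0f}/mo -> {cheaper} wins")
```

With numbers like these, the low-utilization cases rent and the steady, highly utilized case buys, which is the purely financial placement decision Sunil says will eventually drive every workload.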
Raja Mukhopadhyay & Stefanie Chiras - Nutanix .NEXTconf 2017 - #NEXTconf - #theCUBE
[Voiceover] - Live from Washington D.C. It's theCUBE covering dot next conference. Brought to you by Nutanix. >> Welcome back to the district everybody. This is Nutanix NEXTconf, hashtag NEXTconf. And this is theCUBE, the leader in live tech coverage. Stephanie Chiras is here. She's the Vice President of IBM Power Systems Offering Management, and she's joined by Raja Mukhopadhyay who is the VP of Product Management at Nutanix. Great to see you guys again. Thanks for coming on. >> Yeah thank you. Thanks for having us. >> So Stephanie, you're welcome, so Stephanie I'm excited about you guys getting into this whole hyper converged space. But I'm also excited about the cognitive systems group. It's kind of a new play on power. Give us the update on what's going on with you guys. >> Yeah so we've been through some interesting changes here. IBM Power Systems, while we still maintain that branding around our architecture, from a division standpoint we're now IBM Cognitive Systems. We've been through a change in leadership. We have now Senior Vice President Bob Picciano leading IBM Cognitive Systems, which is foundationally built upon the technology that's comes from Power Systems. So our portfolio remains IBM Power Systems, but really what it means is we've set our sights on how to take our technology into really those cognitive workloads. It's a focus on clients going to the cognitive era and driving their business into the cognitive era. It's changed everything we do from how we deliver and pull together our offerings. We have offerings like Power AI, which is an offering built upon a differentiated accelerated product with Power technology inside. It has NVIDIA GPU's, it has NVLink capability, and we have all the optimized frameworks. So you have Caffe, Torch, TensorFlow, Chainer, Theano. All of those are optimized for the server, downloadable right in a binary. So it's really about how do we bring ease of use for cognitive workloads and allow clients to work in machine learning and deep learning. >> So Raja, again, part of the reason I'm so excited is IBM has a $15 billion analytics business. You guys talk, you guys talked to the analysts this morning about one of the next waves of workloads is this sort of data oriented, AI, machine learning workloads. IBM obviously has a lot of experience in that space. How did this relationship come together, and let's talk about what it brings to customers. >> It was all like customer driven, right? So all our customers they told us that, look Nutanix we have used your software to bring really unprecedented levels of like agility and simplicity to our data center infrastructure. But, you know, they run at certain sets of workloads on, sort of, non IBM platforms. But a lot of mission critical applications, a lot of the, you know, the cognitive applications. They want to leverage IBM for that, and they said, look can we get the same Nutanix one click simplicity all across my data center. And that is a promise that we see, can we bring all of the AHV goodness that abstracts the underlying platform no matter whether you're running on x86, or your cognitive applications, or your mission critical applications on IBM power. You know, it's a fantastic thing for a joint customer. >> So Stephanie come on, couldn't you reach somewhere into the IBM portfolio and pull out a hyper converged, you know, solution? Why Nutanix? >> Clients love it. Look what the hyper converged market is doing. It's growing at incredible rates, and clients love Nutanix, right? 
We see incredible repurchases around Nutanix. Clients buy three, next they buy 10. Those repurchase is a real sign that clients like the experience. Now you can take that experience, and under the same simplicity and elegance right of the Prism platform for clients. You can pull in and choose the infrastructure that's best for your workload. So I look at a single Prism experience, if I'm running a database, I can pull that onto a Power based offering. If I'm running a BDI I can pull that onto an alternative. But I can now with the simplicity of action under Prism, right for clients who love that look and feel, pick the best infrastructure for the workloads you're running, simply. That's the beauty of it. >> Raja, you know, Nutanix is spread beyond the initial platform that you had. You have Supermicro inside, you've got a few OEMs. This one was a little different. Can you bring us inside a little bit? You know, what kind of engineering work had to happen here? And then I want to understand from a workload perspective, it used to be, okay what kind of general purpose? What do you want on Power, and what should you say isn't for power? >> Yeah, yeah, it's actually I think a power to, you know it speaks to the, you know, the power of our engineering teams that the level of abstraction that they were able to sort of imbue into our software. The transition from supporting x86 platforms to making the leap onto Power, it has not been a significant lift from an engineering standpoint. So because the right abstractions were put in from the get go. You know, literally within a matter of mere months, something like six to eight months, we were able to have our software put it onto the IBM power platform. And that is kind of the promise that our customers saw that look, for the first time as they are going through a re-platforming of their data center. They see the power in Nutanix as software to abstract all these different platforms. Now in terms of the applications that, you know, they are hoping to run. I think, you know, we're at the cusp of a big transition. If you look at enterprise applications, you could have framed them as systems of record, and systems of engagement. If you look forward the next 10 years, we'll see this big shift, and this new class of applications around systems of intelligence. And that is what a lot-- >> David: Say that again, systems of-- >> Systems of intelligence, right? And that is where a lot of like IBM Power platform, and the things that the Power architecture provides. You know, things around better GPU capabilities. It's going to drive those applications. So our customers are thinking of running both the classical mission critical applications that IBM is known for, but as well as the more sort of forward leaning cognitive and data analytics driven applications. >> So Stephanie, on one hand I look at this just as an extension of what IBM's done for years with Linux. But why is it more, what's it going to accelerate from your customers and what applications that they want to deploy? >> So first, one of the additional reasons Nutanix was key to us is they support the Acropolis platform, which is KVM based. Very much supports our focus on being open around our playing in the Linux space, playing in the KVM space, supporting open. So now as you've seen, throughout since we launched POWER8 back in early 2014 we went Little Endian. We've been very focused on getting a strategic set of ISV's ported to the platform. Right, Hortonworks, MongoDB, EnterpriseDB. 
Now it's about being able to take the value propositions that we have and, you know, we're pretty bullish on our value propositions. We have a two x price performance guarantee on MongoDB that runs better on Power than it runs on the alternative competition. So we're pretty bullish. Now for clients who have taken a stance that their data center will be a hyper converged data center because they like the simplicity of it. Now they can pull in that value in a seamless way. To me it's really all about compatibility. Pick the best architecture, and all compatible within your data center. >> So you talked about, six to eight months you were able to do the integration. Was that Open Power that allowed you to do that, was it Little Endian, you know, advancements? >> I think it was a combination of both, right? We have done a lot from our Linux side to be compatible within the broad Linux ecosystem particularly around KVM. That was critical for this integration into Acropolis. So we've done a lot from the bottoms up to be, you know, Linux is Linux is Linux. And just as Raja said, right, they've done a lot in their platform to be able to abstract from the underlying and provide a seamless experience that, you know, I think you guys used the term invisible infrastructure, right? The experience to the client is simple, right? And in a simple way, pick the best, right for the workload I run. >> You talked about systems of intelligence. Bob Picciano a lot of times would talk about the insight economy. And so we're, you're right we have the systems of records, systems of engagement. Systems of intelligence, let's talk about those workloads a little bit. I infer from that, that you're essentially basically affecting outcomes, while the transaction is occurring. Maybe it's bringing transactions in analytics together. And doing so in a fashion that maybe humans aren't as involved. Maybe they're not involved at all. What do you mean by systems of intelligence, and how do your joint solutions address those? >> Yeah so, you know, one way to look at it is, I mean, so far if you look at how, sort of decisions are made and insights are gathered. It's we look at data, and between a combination of mostly, you know we try to get structured data, and then we try to draw inferences from it. And mostly it's human beings drawing the inferences. If you look at the promise of technologies like machine learning and deep learning. It is precisely that you can throw unstructured data where no patterns are obvious, and software will find patterns there in. And what we mean by systems of intelligence is imagine you're going through your business, and literally hundreds of terabytes of your transactional data is flowing through a system. The software will be able to come up with insights that would be very hard for human beings to otherwise kind of, you know infer, right? So that's one dimension, and it speaks to kind of the fact that there needs to be a more real time aspect to that sort of system. >> Is part of your strategy to drive specific solutions, I mean integrating certain IBM software on Power, or are you sort of stepping back and say, okay customers do whatever you want. Maybe you can talk about that. >> No we're very keen to take this up to a solution value level, right? We have architected our ISV strategy. We have architected our software strategy for this space, right? It is all around the cognitive workloads that we're focused on. 
But it's about not just being a platform and an infrastructure platform, it's about being able to bring that solution level above and target it. So when a client runs that workload they know this is the infrastructure they should put it on. >> What's the impact on the go to market then for that offering? >> So from a solutions level or when the-- >> Just how you know it's more complicated than the traditional, okay here is your platform for infrastructure. You know, what channel, maybe it's a question for Raja, but yeah. >> Yeah sure, so clearly, you know, the product will be sold by, you know, the community of Nutanix's channel partners as well as IBM's channels partners, right? So, and, you know, we'll both make the appropriate investments to make sure that the, you know, the daughter channel community is enabled around how they essentially talk about the value proposition of the solution in front of our joint customers. >> Alright we have to leave there, Stephanie, Raja, thanks so much for coming back in theCUBE. It's great to see you guys. >> Raja: Thank you. >> Stephanie: Great to see you both, thank you. >> Alright keep it right there everybody we'll be back with our next guest we're live from D.C. Nutanix dot next, be right back. (electronic music)
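Raja's "systems of intelligence" point above, software surfacing patterns in transaction streams that a human reviewer would miss, can be illustrated with a generic anomaly-detection sketch. This is a toy scikit-learn example on simulated data; it is not PowerAI, Nutanix, or any product discussed in this segment.

```python
# Toy illustration of letting software flag non-obvious patterns in a
# transaction stream instead of a human eyeballing reports. Generic
# scikit-learn example with simulated data; nothing here is specific to
# the products discussed above.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated transaction features: amount and hour-of-day for normal traffic...
normal = np.column_stack([rng.normal(60, 15, 5000),   # typical amounts
                          rng.normal(14, 3, 5000)])   # daytime hours
# ...plus a handful of odd transactions buried in the volume.
odd = np.column_stack([rng.normal(900, 50, 10),        # unusually large
                       rng.normal(3, 1, 10)])          # middle of the night
transactions = np.vstack([normal, odd])

model = IsolationForest(contamination=0.005, random_state=0)
labels = model.fit_predict(transactions)   # -1 marks suspected anomalies

flagged = transactions[labels == -1]
print(f"Flagged {len(flagged)} of {len(transactions)} transactions for review")
```

The design point is that the model, not a person, decides which handful of records out of thousands deserve a closer look.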
Carlos Carrero, Veritas - OpenStack Summit 2017 - #OpenStackSummit - #theCUBE
>> Narrator: Live from Boston, Massachusetts, it's theCUBE covering OpenStack Summit 2017. Brought to you by the OpenStack Foundation, Red Hat, and additional ecosystem support. >> Hi. I'm Stu Miniman here with my cohost John Troyer. Happy to welcome to the program Carlos Carrero, who's a senior principal product manager with Veritas. Carlos, great to see you. >> Yeah, thank you very much. >> Stu: Alright. >> Great to be here. >> So, so many of the things we talk about here in OpenStack and the cloud world are relatively short-lived. The average lifetime of the average cloud deployment is like 1.7 years. You've been at Veritas a little bit longer than that. I had an opportunity to have a conversation with you about some of your history, so we're going to have to take the abbreviated format of that, but give us a little bit about, you know, your time at Veritas, some of the ebbs and flows of your career. >> Yeah, well, again, thank you for having me here. It's great. I've been 16 years with Veritas, and as I mentioned to you before, you know, back in 1994, 1995 we created the first file system and volume manager, right. A lot of things happened since then, right. At that point in time, the software defined storage term was not yet there. Back, many years ago, we got some piece of software running on top of any kind of hardware and we were able to help customers to move workloads from one place to another, from a very agnostic point of view, right. And then we moved into clouds, and now, three years ago, we started looking into what do we do with OpenStack clouds, because this is going to be different... It's going to need something very new, something different. So today, this week, we are very happy because we finally announced HyperScale for OpenStack, which is a software defined storage solution that has been built for OpenStack clouds. >> When I look at the industry these days, the term lately is storage services. We're doing things in software more, and OpenStack is the open source infrastructure piece. You guys are the hipster player in this space. You were doing software defined storage and software services not attached to everything else beforehand, so it sounds like OpenStack's a natural fit. Tell us a little bit more about how Veritas fits into that. >> Well, I think that again, it was a perfect fit, but we had to review what we were doing. Okay, because again, I've been many years... I was working with traditional legacy architectures in the past. We had a world-class file system that today can work with 128 nodes. But we revisited it... Is this what we really need for the new OpenStack clouds? Is it going to scale? And as you said, is that what I need for the storage services? So what do we have to rethink? What do we have to do to provide those storage services to the OpenStack clouds? So three years ago, we had this, we called it the Open Flame project, that today is HyperScale. It has been built from scratch. A new product, what we call an emerging product at Veritas, and finally we got separated from Symantec, and we got all the visibility on the storage game. And we're using all the know-how that we have in our history; as I say, we're a very big startup, right? But now, emerging with new products, we need new solutions that have been designed for OpenStack from scratch. >> Could you drill down on the product itself? Is this file, block, object storage? Is this sitting on top of servers, laid out in a server-based way? How does it interact with OpenStack drivers? That sort of thing.
>> Yeah, that's a good question. So it is Cinder storage. What we provide is block storage for OpenStack. Something key, it is based on commodity hardware of your choice, so you decide what hardware you want to use. Really, it's x86 servers that you can choose in the market, whatever you want. And one of the key differentiators is that we provide block storage, but we separate the compute plane and the data plane. And this is an architectural decision we had to take three years ago. We said we cannot scale, we cannot provide the storage services that you need, in a single layer of storage. Because that is what most of the software defined storage solutions on the market are doing today. And then they're having problems with things like noisy neighbor. They have problems with things like scalability, like quality of service, and of course they're having problems with protection. How do I protect my cloud environments with OpenStack? And we as a NetBackup company, we have our leading NetBackup solution, and we hear that from our customers: it cannot be that we're bringing another solution that is going to be another noisy neighbor. So we really have to separate two layers. The compute plane, where you have your first copy, and the data plane, where you use cheaper and deeper storage to keep the second, third copy, and do all the data mining operations. >> That's interesting what you just said there too. Two copies, so you do have a copy that's close to the compute. But then you have another. >> Correct. Because, again, if you take a look at what you have in the market, typically it's one-size-fits-all. So, do you need three copies for everything? And today, you have emerging technologies. You can have things like mySQL, where you need high performance, or you can have things like Cassandra where you end up with nine copies, because the application itself is giving you the resiliency. So if you use a standard solution where for each OpenStack instance you have three copies, that means you have three copies, three copies, three copies. So nine copies. And it's not only the number of copies. It's that when you make a write, you're writing nine times. And you're writing on a single layer. So we said, we have to separate that. The first thing is, what is the workload? Stop thinking about the storage. Stop thinking this is a pool of SSDs or a pool of HDDs, and then start thinking about the workload. And then we connected that very well with OpenStack because in OpenStack you have the definition of flavors, right? That is, how many CPUs do you need? How much memory? But also we extend those flavors to say, what do you need in terms of storage? What is the resiliency level that you need? What is the number of copies? What is the minimum performance that you need? What is the maximum performance? It's not only about solving the noisy neighbor with the maximum performance, about limiting; it's about guaranteeing that you are going to have a minimum number of IOs per second. At the end, what you can get is a mySQL running with high performance needs alongside web servers on the same box without fighting each other. >> Carlos, can you speak a little bit about how customers consume this, how do they buy it, how's it priced? How do you get it to market? We've talked before with Veritas. Storage used to always be in an appliance or an array or things like that, and the software cloud world is a little bit different. How does that fit? >> So today it's software only.
So you make that decision about what hardware to use. We try to simplify the go to market model, so it's based on subscription. You just pay for the max capacity that you have, and you only pay for what you have at the compute plane. So I think it's the simplest model that we could find to go into the open source projects and be able to attach to that. >> Okay, could you speak to... When you talk about go to market from a partnership standpoint, it's a big market out there. Veritas, well-known name for many years, but what partners are involved in this? Any certifications that are needed? >> We're working with our typical partners that have some expertise with OpenStack, and we're helping them. We are now also working with hardware providers. We are working with Supermicro and creating reference architectures with them. At the end, we have to explain to the customers what they can get from different hardware, so we're working with them. And we're also working with new partners. For example, yesterday with us on the stage, we had Fairbanks. Fairbanks is an OpenStack ambassador in the Netherlands. They have been working with us from the very beginning of the project, on the validation. They understand OpenStack. They understand the issues and they have been doing all the validation with us about, yes guys, this is the right thing. You have to do it from the very beginning. >> Is this product tuned specifically for OpenStack or will it be available for other kinds of private cloud applications?
As we get towards the end of the event, I'm sure you've had plenty of interesting customer conversations. Any one, I'm sure you can't mention names, but any interesting anecdote or just a general feel of the community? >> I feel that my anecdote for yesterday, when I had to work presentation, we had a customer on the room. We had been working on a POC with them. We have been very, very helpful customer. We finished. "Do you have any questions?" This guys stands up, went to the microphone and I was thinking, what is he going to ask? He knows everything about the product. And he said, he guys, you are doing the right thing. This is great. I'm fantastic, you are bringing a lot of value here. So I was like, wow. >> In my understanding, it was a big brand name customer who actually said where he was from, which is great validation, something we've heard all week is there's that sharing here with the community, so financial companies who, in the past, wouldn't have done that, TelCos who do that in the past, great to see. Give me the final word, Carlos. >> Yeah, the thing, again, is as you said validation is a key thing. I've been a lot of years in the company. I got this project eight months ago, and all the things I've been doing is validation, talking to customers to I don't know how many analysts I've been talking to in this week. And I love Dan said, yeah, you guys are doing the right thing. This is that direction that we have to move, so happy that finally, emerging again from Veritas, being back here with the community on OpenStack. >> Well, the speed of change, constant learning on new things and helping customers move forward. Big theme we've seen in the show. Carlos Carrera. I appreciate you joining us here. For John and Stu, thanks for watching The Cube here at OpenStack Summit. (mid-tempo electronic music)
Wikibon Research Meeting
>> Dave: The cloud. There you go. I presume that worked. >> David: Hi there. >> Dave: Hi David. We had agreed, Peter and I had talked and we said let's just pick three topics, allocate enough time. Maybe a half hour each, and then maybe a little bit longer if we have the time. Then try and structure it so we can gather some opinions on what it all means. Ultimately the goal is to have an outcome with some research that hits the network. The three topics today, Jim Kobeielus is going to present on agile and data science, David Floyer on NVMe over fabric and of course keying off of the Micron news announcement. I think Nick is, is that Nick who just joined? He can contribute to that as well. Then George Gilbert has this concept of digital twin. We'll start with Jim. I guess what I'd suggest is maybe present this in the context of, present a premise or some kind of thesis that you have and maybe the key issues that you see and then kind of guide the conversation and we'll all chime in. >> Jim: Sure, sure. >> Dave: Take it away, Jim. >> Agile development and team data science. Agile methodology obviously is well-established as a paradigm and as a set of practices in various schools in software development in general. Agile is practiced in data science in terms of development, the pipelines. The overall premise for my piece, first of all starting off with a core definition of what agile is as a methodology. Self-organizing, cross-functional teams. They sprint toward results in steps that are fast, iterative, incremental, adaptive and so forth. Specifically the premise here is that agile has already come to data science and is coming even more deeply into the core practice of data science where data science is done in team environment. It's not just unicorns that are producing really work on their own, but more to the point, it's teams of specialists that come together in co-location, increasingly in co-located environments or in co-located settings to produce (banging) weekly check points and so forth. That's the basic premise that I've laid out for the piece. The themes. First of all, the themes, let me break it out. In terms of the overall how I design or how I'm approaching agile in this context is I'm looking at the basic principles of agile. It's really practices that are minimal, modular, incremental, iterative, adaptive, and co-locational. I've laid out how all that maps in to how data science is done in the real world right now in terms of tight teams working in an iterative fashion. A couple of issues that I see as regards to the adoption and sort of the ramifications of agile in a data science context. One of which is a co-location. What we have increasingly are data science teams that are virtual and distributed where a lot of the functions are handled by statistical modelers and data engineers and subject matter experts and visualization specialists that are working remotely from each other and are using collaborative tools like the tools from the company that I just left. How can agile, the co-location work primer for agile stand up in a world with more of the development team learning deeper and so forth is being done on a scrutiny basis and needs to be by teams of specialists that may be in different cities or different time zones, operating around the clock, produce brilliant results? 
Another one of which is that agile seems to be predicated on the notion that you improvise the process as you go, trial and error which seems to fly in the face of documentation or tidy documentation. Without tidy documentation about how you actually arrived at your results, how come those results can not be easily reproduced by independent researchers, independent data scientists? If you don't have well defined processes for achieving results in a certain data science initiative, it can't be reproduced which means they're not terribly scientific. By definition it's not science if you can't reproduce it by independent teams. To the extent that it's all loosey-goosey and improvised and undocumented, it's not reproducible. If it's not reproducible, to what extent should you put credence in the results of a given data science initiative if it's not been documented? Agile seems to fly in the face of reproducibility of data science results. Those are sort of my core themes or core issues that I'm pondering with or will be. >> Dave: Jim, just a couple questions. You had mentioned, you rattled off a bunch of parameters. You went really fast. One of them was co-location. Can you just review those again? What were they? >> Sure. They are minimal. The minimum viable product is the basis for agile, meaning a team puts together data a complete monolithic sect, but an initial deliverable that can stand alone, provide some value to your stakeholders or users and then you iteratively build upon that in what I call minimum viable product going forward to pull out more complex applications as needed. There's sort of a minimum viable product is at the heart of agile the way it's often looked at. The big question is, what is the minimum viable product in a data science initiative? One way you might approach that is saying that what you're doing, say you're building a predictive model. You're predicting a single scenario, for example such as whether one specific class of customers might accept one specific class of offers under the constraining circumstances. That's an example of minimum outcome to be achieved from a data science deliverable. A minimum product that addresses that requirement might be pulling the data from a single source. We'll need a very simplified feature set of predictive variables like maybe two or three at the most, to predict customer behavior, and use one very well understood algorithm like linear regressions and do it. With just a few lines of programming code in Python or Aura or whatever and build us some very crisp, simple rules. That's the notion in a data science context of a minimum viable product. That's the foundation of agile. Then there's the notion of modular which I've implied with minimal viable product. The initial product is the foundation upon which you build modular add ons. The add ons might be building out more complex algorithms based on more data sets, using more predictive variables, throwing other algorithms in to the initiative like logistic regression or decision trees to do more fine-grained customer segmentation. What I'm giving you is a sense for the modular add ons and builds on to the initial product that generally weaken incrementally in the course of a data science initiative. Then there's this, and I've already used the word incremental where each new module that gets built up or each new feature or tweak on the core model gets added on to the initial deliverable in a way that's incremental. 
Ideally it should all compose ultimately the sum of the useful set of capabilities that deliver a wider range of value. For example, in a data science initiative where it's customer data, you're doing predictive analysis to identify whether customers are likely to accept a given offer. One way to add on incrementally to that core functionality is to embed that capability, for example, in a target marketing application like an outbound marketing application that uses those predictive variables to drive responses in line to, say an e-commerce front end. Then there's the notion of iterative and iterative really comes down to check points. Regular reviews of the standards and check points where the team comes together to review the work in a context of data science. Data science by its very nature is exploratory. It's visualization, it's model building and testing and training. It's iterative scoring and testing and refinement of the underlying model. Maybe on a daily basis, maybe on a weekly basis, maybe adhoc, but iteration goes on all the time in data science initiatives. Adaptive. Adaptive is all about responding to circumstances. Trial and error. What works, what doesn't work at the level of the clinical approach. It's also in terms of, do we have the right people on this team to deliver on the end results? A data science team might determine mid-way through that, well we're trying to build a marketing application, but we don't have the right marketing expertise in our team, maybe we need to tap Joe over there who seems to know a little bit about this particular application we're trying to build and this particular scenario, this particular customers, we're trying to get a good profile of how to reach them. You might adapt by adding, like I said, new data sources, adding on new algorithms, totally changing your approach for future engineering as you go along. In addition to supervised learning from ground troops, you might add some unsupervised learning algorithms to being able to find patterns in say unstructured data sets as you bring those into the picture. What I'm getting at is there's a lot, 10 zillion variables that, for a data science team that you have to add in to your overall research plan going forward based on, what you're trying to derive from data science is its insights. They're actionable and ideally repeatable. That you can embed them in applications. It's just a matter of figuring out what actually helps you, what set of variables and team members and data and sort of what helps you to achieve the goals of your project. Finally, co-locational. It's all about the core team needs to be, usually in the same physical location according to the book how people normally think of agile. The company that I just left is basically doing a massive social engineering exercise, ongoing about making their marketing and R&D teams a little more agile by co-locating them in different cities like San Francisco and Austin and so forth. The whole notion that people will collaborate far better if they're not virtual. That's highly controversial, but none-the-less, that's the foundation of agile as it's normally considered. One of my questions, really an open question is what hard core, you might have a sprawling team that's doing data science, doing various aspects, but what solid core of that team needs to be physically co-located all or most of the time? Is it the statistical modeler and a data engineer alone? 
The one who stands up how to do cluster and the person who actually does the building and testing of the model? Do the visualization specialists need to be co-located as well? Are other specialties like subject matter experts who have the knowledge in marketing, whatever it is, do they also need to be in the physical location day in, day out, week in and week out to achieve results on these projects? Anyway, so there you go. That's how I sort of appealed the argument of (mumbling). >> Dave: Okay. I got a minimal modular, incremental, iterative, adaptive, co-locational. What was six again? I'm sorry. >> Jim: Co-locational. >> Dave: What was the one before that? >> Jim: I'm sorry. >> Dave: Adaptive. >> Minimal, modular, incremental, iterative, adaptive, and co-locational. >> Dave: Okay, there were only six. Sorry, I thought it was seven. Good. A couple of questions then we can get the discussion going here. Of course, you're talking specifically in the context of data science, but some of the questions that I've seen around agile generally are, it's not for everybody, when and where should it be used? Waterfalls still make sense sometimes. Some of the criticisms I've read, heard, seen, and sometimes experienced with agile are sort of quality issues, I'll call it lack of accountability. I don't know if that's the right terminology. We're going for speed so as long as we're fast, we checked that box, quality can sacrifice. Thoughts on that. Where does it fit and again understanding specifically you're talking about data science. Does it always fit in data science or because it's so new and hip and cool or like traditional programming environments, is it horses for courses? >> David: Can I add to that, Dave? It's a great, fundamental question. It seems to me there's two really important aspects of artificial intelligence. The first is the research part of it which is developing the algorithms, developing the potential data sources that might or might not matter. Then the second is taking that and putting it into production. That is that somewhere along the line, it's saving money, time, etc., and it's integrated with the rest of the organization. That second piece is, the first piece it seems to be like most research projects, the ROI is difficult to predict in a new sort of way. The second piece of actually implementing it is where you're going to make money. Is agile, if you can integrate that with your systems of record, for example and get automation of many of the aspects that you've researched, is agile the right way of doing it at that stage? How would you bridge the gap between the initial development and then the final instantiation? >> That's an important concern, David. Dev Ops, that's a closely related issue but it's not exactly the same scope. As data science and machine learning, let's just net it out. As machine learning and deep learning get embedded in applications, in operations I should say, like in your e-commerce site or whatever it might be, then data science itself becomes an operational function. The people who continue to iterate those models in line the operational applications. Really, where it comes down to an operational function, everything that these people do needs to be documented and version controlled and so forth. These people meaning data science professionals. You need documentation. You need accountability. The development of these assets, machine learning and so forth, needs to be, is compliance. 
When you look at compliance, algorithmic accountability comes into it where lawyers will, like e-discovery. They'll subpoena, theoretically all your algorithms and data and say explain how you arrived at this particular recommendation that you made to grant somebody or not grant somebody a loan or whatever it might be. The transparency of the entire development process is absolutely essential to the data science process downstream and when it's a production application. In many ways, agile by saying, speed's the most important thing. Screw documentation, you can sort of figure that out and that's not as important, that whole pathos, it goes by the wayside. Agile can not, should not skip on documentation. Documentation is even more important as data science becomes an operational function. That's one of my concerns. >> David: I think it seems to me that the whole rapid idea development is difficult to get a combination of that and operational, boring testing, regression testing, etc. The two worlds are very different. The interface between the two is difficult. >> Everybody does their e-commerce tweaks through AB testing of different layouts and so forth. AB testing is fundamentally data science and so it's an ongoing thing. (static) ... On AB testing in terms of tweaking. All these channels and all the service flow, systems of engagement and so forth. All this stuff has to be documented so agile sort of, in many ways flies in the face of that or potentially compromises the visibility of (garbled) access. >> David: Right. If you're thinking about IOT for example, you've got very expensive machines out there in the field which you're trying to optimize true put through and trying to minimize machine's breaking, etc. At the Micron event, it was interesting that Micron's use of different methodologies of putting systems together, they were focusing on the data analysis, etc., to drive greater efficiency through their manufacturing process. Having said that, they need really, really tested algorithms, etc. to make sure there isn't a major (mumbling) or loss of huge amounts of potential revenue if something goes wrong. I'm just interested in how you would create the final product that has to go into production in a very high value chain like an IOT. >> When you're running, say AI from learning algorithms all the way down to the end points, it gets even trickier than simply documenting the data and feature sets and the algorithms and so forth that were used to build up these models. It also comes down to having to document the entire life cycle in terms of how these algorithms were trained to make the predictors of whatever it is you're trying to do at the edge with a particular algorithm. The whole notion of how are all of these edge points applications being trained, with what data, at what interval? Are they being retrained on a daily basis, hourly basis, moment by moment basis? All of those are critical concerns to know whether they're making the best automated decisions or actions possible in all scenarios. That's like a black box in terms of the sheer complexity of what needs to be logged to figure out whether the application is doing its job as best a possible. You need a massive log, you need a massive event log from end to end of the IOT to do that right and to provide that visibility ongoing into the performance of these AI driven edge devices. I don't know anybody who's providing the tool to do it. 
>> David: If I think about how it's done at the moment, it's obviously far too slow at the moment. At the same time, you've got to have some testing and things like that. It seems to me that you've got a research model on one side and then you need to create a working model from that which is your production model. That's the one that goes through the testing and everything of that sort. It seems to me that the interface would be that transition from the research model to the working model that would be critical here and the working model is obviously a subset and it's going to be optimized for performance, etc. in real time, as opposed to the development model which can be a lot to do and take half a week to manage it necessary. It seems to me that you've got a different set of business pressures on the working model and a different set of skills as well. I think having one team here doesn't sound right to me. You've got to have a Dev Ops team who are going to take the working model from the developers and then make sure that it's sound and save. Especially in a high value IOT area that the level of iteration is not going to be nearly as high as in a lower cost marketing type application. Does that sound sensible? >> That sounds sensible. In fact in Dev Ops, the Dev Ops team would definitely be the ones that handle the continuous training and retraining of the working models on an ongoing basis. That's a core observation. >> David: Is that the right way of doing it, Jim? It seems to me that the research people would be continuing to adapt from data from a lot of different places whereas the operational model would be at a specific location with a specific IOT and they wouldn't have necessarily all the data there to do that. I'm not quite sure whether - >> Dave: Hey guys? Hey guys, hey guys? Can I jump in here? Interesting discussion, but highly nuanced and I'm struggling to figure out how this turns into a piece or sort of debating some certain specifics that are very kind of weedy. I wonder if we could just reset for a second and come back to sort of what I was trying to get to before which is really the business impact. Should this be applied broadly? Should this be applied specifically? What does it mean if I'm a practitioner? What should I take away from, Jim your premise and your sort of fixed parameters? Should I be implementing this? Why? Where? What's the value to my organization - the value I guess is obvious, but does it fit everywhere? Should it be across the board? Can you address that? >> Neil: Can I jump in here for a second? >> Dave: Please, that would be great. Is that Neil? >> Neil: Neil. I've never been a data scientist, but I was an actuary a long time ago. When the truth actuary came to me and said we need to develop a liability insurance coverage for floating oil rigs in the North Sea, I'm serious, it took a couple of months of research and modeling and so forth. If I had to go to all of those meetings and stand ups in an agile development environment, I probably would have gone postal on the place. I think that there's some confusion about what data science is. It's not a vector. It's not like a Dev Op situation where you start with something and you go (mumbling). When a data scientist or whatever you want to call them comes up with a model, that model has to be constantly revisited until it's put out of business. It's refined, it's evaluated. It doesn't have an end point like that. 
The other thing is that a data scientist is typically going to be running multiple projects simultaneously, so how in the world are you going to agilize that? I think if you look at the data science group, they're probably, I think Nick said this, there are probably groups in there that are doing Dev Ops, software engineering and so forth, and you can apply agile techniques to them. The whole data science thing is too squishy for that, in my opinion. >> Jim: Squishy? What do you mean by squishy, Neil? >> Neil: It's not one thing. I think if you try to represent data science as, here's a project, we gather data, we work on a model, we test it, and then we put it into production, it doesn't end there. It never ends. It's constantly being revised. >> Yeah, of course. It's akin to application maintenance. The application, meaning the model, the algorithm, to be fit for purpose has to continually be evaluated, possibly tweaked, always retrained to determine its predictive fit for whatever task it's been assigned. You don't build it once and assume a strong predictive fit forever and ever. You can never assume that. >> Neil: James and I called that adaptive control mechanisms. You put a model out there and you monitor the return you're getting. You talk about AB testing, that's one method of doing it. I think that a data scientist is somebody who really is keyed into the machine learning and all that jazz. I just don't see them as being project oriented. I'll tell you one other thing, I have a son who's a software engineer and he said something to me the other day. He said, "Agile? Agile's dead." I haven't had a chance to find out what he meant by that. I'll get back to you. >> Oh, okay. If you look at - Go ahead. >> Dave: I'm sorry, Neil. Just to clarify, he said agile's dead? Was that what he said? >> Neil: I didn't say it, my son said it. >> Dave: Yeah, yeah, yeah right. >> Neil: No idea what he was talking about. >> Dave: Go ahead, Jim. Sorry. >> If you look at waterfall development in general, for larger projects it's absolutely essential to get requirements nailed down and the functional specifications and all that. Where you have some very extensive projects and many moving parts, obviously you need a master plan that it all fits into, and waterfall, those checkpoints and so forth, those controls that are built into that methodology, are critically important. Within the context of a broad project, some of the assets being built up might be machine learning models and analytics models and so forth, so in the context of a broader waterfall oriented software development initiative, you might need to have multiple data science projects spun off within the sub-projects. Each of those would fit into that plan, but by itself might be handled sort of like an exploration task, where you have a team doing data visualization and exploration in more of an open-ended fashion, because they're trying to figure out the right set of predictors and the right set of data to be able to build out the right model to deliver the right result. What I'm getting at is that agile approaches might be embedded into broader waterfall oriented development initiatives, agile data science approaches. Fundamentally, data science began and still is predominantly very smart people, PhDs in statistics and math, doing open-ended exploration of complex data looking for non-obvious patterns that you wouldn't be able to find otherwise. Sort of a fishing expedition, a high priced fishing expedition.
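Neil's "adaptive control mechanism" and Jim's point about continually re-evaluating predictive fit boil down to monitoring a deployed model and flagging it when its returns degrade. A toy sketch, with made-up window sizes and thresholds, might look like this:

```python
from collections import deque

class AdaptiveMonitor:
    """Toy version of an adaptive control mechanism: watch the return a
    deployed model is getting and flag it for retraining when performance
    drifts below a floor. Window size and floor are illustrative."""

    def __init__(self, window=100, floor=0.80):
        self.outcomes = deque(maxlen=window)   # 1 = correct, 0 = wrong
        self.floor = floor

    def record(self, prediction, actual):
        self.outcomes.append(1 if prediction == actual else 0)

    def needs_retraining(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False                       # not enough evidence yet
        hit_rate = sum(self.outcomes) / len(self.outcomes)
        return hit_rate < self.floor

monitor = AdaptiveMonitor(window=5, floor=0.8)
for pred, actual in [(1, 1), (0, 1), (1, 0), (0, 0), (1, 0)]:
    monitor.record(pred, actual)
print(monitor.needs_retraining())  # True: the model has drifted, revisit it
```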
That's kind of the mode of operation for how data science is often conducted in the real world. Looking for that eureka moment when the correlations just jump out at you. There's a lot of that that goes on. A lot of that is very important data science; it's more akin to pure science. What I'm getting at is there might be some role for more structured, waterfall development approaches in projects that have a data science, core data science capability to them. Those are my thoughts. >> Dave: Okay, we probably should move on to the next topic here, but just in closing can we get people to chime in on sort of the bottom line here? If you're writing to an audience of data scientists or data scientist wannabes, what's the one piece of advice or a couple of pieces of advice that you would give them? >> First of all, data science is a developer competency. Modern developers, many of them, need to be data scientists or have a strong grounding and understanding of data science, because much of that machine learning and all that is increasingly the core of what software developers are building, so you can't not understand data science if you're a modern software developer. You can't understand data science as it (garbled) if you don't understand the need for agile, iterative steps, because they're looking for the needle in the haystack quite often. The right combination of predictive variables and the right combination of algorithms and the right training regimen in order to get it all to fit. It's a new world competency that needs to be mastered if you're a software development professional. >> Dave: Okay, anybody else want to chime in on the bottom line there? >> David: Just my two pennies' worth is that the key task of the data scientists is to come up with the algorithms and then implement them in a way that is robust and is part of the system as a whole. The return on investment on the data science piece as an insight isn't worth anything until it's actually implemented and put into production of some sort. It seems that the second stage of creating the working model is what is really the output of your data scientists. >> Yeah, it's the repeatable, deployable asset that incorporates the crux of data science, which is algorithms that are data driven, statistical algorithms that are data driven. >> Dave: Okay. If there's nothing else, let's close this agenda item out. Is Nick on? Did Nick join us today? Nick, you there? >> Nick: Yeah. >> Dave: Sounds like you're on. Tough to hear you. >> Nick: How's that? >> Dave: Better, but still not great. Okay, we can at least hear you now. David, you wanted to present on NVMe over fabric, pivoting off the Micron news. What is NVMe over fabric and who gives a fuck? (laughing) >> David: This is Micron, we talked about it last week. This is the Micron announcement. What they announced is NVMe over fabric, which, as we talked about last time, is the ability to create a whole number of nodes. They've tested 250; the architecture will take them to 1,000. 1,000 processors, or 1,000 nodes, and the ability to access the data on any single node at roughly the same speed. They are quoting 200 microseconds. It's 195 if it's local and it's 200 if it's remote. That is a very, very interesting architecture which is like nothing else that's been announced. >> Participant: David, can I ask a quick question? >> David: Sure. >> Participant: This latency and the node count sounds astonishing. Is Intel not replicating this or challenging it in scope with their 3D XPoint?
>> David: 3D XPoint, Intel would love to sell that as a key component of this. 3D XPoint as a storage device is very, very, very expensive. You can replicate most of the function of 3D XPoint at a much lower price point by using a combination of DRAM and protected DRAM and flash. At the moment, 3D XPoint is a nice to have and there will be circumstances where they will use it, but at the meeting yesterday, I don't think they, they might have brought it up once. They didn't emphasize it (mumbles) at all as being part of it. >> Participant: To be clear, this means rather than buying Intel servers rounded out with lots of 3D XPoint, you buy Intel servers just with the CPU and then all the Micron niceness for their NVMe and their interconnect? >> David: Correct. They are still Intel servers. The ones they were displaying yesterday were HP ones; they also used Supermicro. They want certain characteristics of the chip set that are used, but those are just standard pieces. The other parts of the architecture are the Mellanox 100 gigabit converged ethernet, using RoCE, which is RDMA over converged ethernet. That is the secret sauce, and Mellanox themselves, their cards offload a lot of functionality. That's the secret sauce which allows you to go from any point to any point in 5 microseconds. Then you create a transfer and other things; files are on top of that. >> Participant: David, another quick question. The latency is incredibly short. >> David: Yep. >> Participant: What happens if, say, an MPP SQL database with 1,000 nodes has to shuffle a lot of data? What's the throughput? Is it limited by that 100 gig or is that so insanely large that it doesn't matter? >> David: The key is this: it allows you to move the processing to wherever the data is very, very easily. The principle that will evolve from this architecture is that you know where the data is, so don't move the data around, that'll block things up. Move the processing to that particular node or some adjacent node and do the processing as close as possible. That, as an architecture, is the long term goal. Obviously in the short term, you've got to take things as they are. Clearly, a different type of architecture for databases will need to eventually evolve out of this. At the moment, what they're focusing on is big problems which need low latency solutions, using databases as they are and the whole end-to-end stack, which is a much faster way of doing it. Then over time, they'll adapt new databases, new architectures to really take advantage of it. What they're offering is a POC at the moment. It's in beta. They had their customers talking about it and they were very complimentary in general about it. They hope to get it into full production this year. There's going to be a host of other people doing this. I was trying to bottom line this in terms of really what the link is with digital enablement. For me, true digital enablement is enabling any relevant data to be available for processing at the point of business engagement in real time or near real time. That's the definition that this architecture enables. It's, in my view, a potential game changer in that this is an architecture which will allow any data to be available for processing. You don't have to move the data around, you move the processing to that data. >> Is Micron first to market with this capability, David? NV over Me? NVMe. >> David: Over fabric? Yes. >> Jim: Okay.
>> David: Having said that, there are a lot of start ups which have got a significant amount of money and who are coming to market with their own versions. You would expect Dell, HP to be following suit. >> Dave: David? Sorry. Finish your thought and then I have another quick question. >> David: No, no. >> Dave: The principle, and you've helped me understand this many times, going all the way back to Hadoop, is bring the application to the data, but when you're using conventional relational databases and you've had it all normalized, you've got to join stuff that might not be co-located. >> David: Yep. That's the whole point about the five microseconds. Now the impact of non co-location, if you have to join stuff or whatever it is, is much, much lower. So you can do the logical join, whatever it is, very quickly and very easily across that whole fabric. In terms of processing against that data, you would then choose to move the application to that node because it's much less data to move; that's an optimization of the architecture as opposed to a fundamental design point. You can then optimize where you run the thing. This is an ideal architecture for where I personally see things going, which is traditional systems of record which need to be exactly as they've ever been, and then alongside them, the artificial intelligence, the systems of understanding, data warehouses, etc. Having that data available in the same space so that you can combine those two elements in real time or in near real time. The advantage of that in terms of digital enablement and business value is the biggest thing of all. That's a 50% improvement in overall productivity of a company; that's the thing that will drive, in my view, 99% of the business value. >> Dave: Going back just to the join thing, 100 gigs with five microseconds, that's really, really fast, but if you've got petabytes of data on these thousand nodes and you have to do a join, you've still got to go through that 100 gig pipe for stuff that's not co-located. >> David: Absolutely. The way you would design that is as you would design any query. You would need a process in front of that, which is query optimization, to be able to farm out all of the independent jobs needed in each of the nodes and then take the output of that and bring it together. Both the concepts are already there. >> Dave: Like a map. >> David: Yes. That's right. All of the data science is there. You're starting from an architecture which is fundamentally different from the traditional let's-get-it-out architectures that have existed, by removing that huge overhead of going from one to another. >> Dave: Oh, because this goes, it's like a mesh not a ring? >> David: Yes, yes. >> Dave: It's like the high performance compute world, this MPI-type architecture? >> David: Absolutely. NVMe, by definition, is a point to point architecture. RoCE, underneath it, is a point to point architecture. Everything is point to point. Yes. >> Dave: Oh, got it. That really does call for a redesign. >> David: Yes, you can take it in steps. It'll work as it is and then over time you'll optimize it to take advantage of it more. Does that definition of (mumbling) make sense to you guys? The one I quoted to you? Enabling any relevant data to be available for processing at the point of business engagement, in real time or near real time? That's where you're trying to get to and this is a very powerful enabler of that design.
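David's query-optimization point, farming independent jobs out to the nodes that hold the data and only bringing the small results back, is the familiar scatter-gather pattern. A minimal, purely illustrative sketch (the in-memory "nodes" stand in for real shards on the fabric):

```python
def scatter_gather(nodes, predicate, aggregate):
    """Minimal sketch of the 'move the processing to the data' pattern:
    each node filters its own local shard, and only the small per-node
    results travel back over the fabric to be combined."""
    partials = []
    for shard in nodes:                       # in reality, runs on each node
        partials.append([row for row in shard if predicate(row)])
    return aggregate(partials)                # the gather / reduce step

nodes = [
    [{"region": "eu", "amount": 10}, {"region": "us", "amount": 7}],
    [{"region": "eu", "amount": 3}],
    [{"region": "us", "amount": 12}, {"region": "eu", "amount": 5}],
]
total_eu = scatter_gather(
    nodes,
    predicate=lambda row: row["region"] == "eu",
    aggregate=lambda parts: sum(r["amount"] for part in parts for r in part),
)
print(total_eu)  # 18 -- only the filtered rows crossed the 'fabric'
```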
>> Nick: You're emphasizing the network topology, while I kind of thought the heart of the argument was performance. >> David: Could you repeat that? It's very - >> Dave: Let me repeat. Nick's a little light, but I could hear him fine. You're emphasizing the network topology, but Nick's saying his takeaway was that the whole thrust was performance. >> Nick: Correct. >> David: Absolutely. Absolutely. The result of that network topology is a many times improvement in performance of the systems as a whole that you couldn't achieve in any previous architecture. I totally agree. That's what it's about: enabling low latency applications with much, much more data available, by being able to break things up in parallel and deliver multiple streams to an end result. Yes. >> Participant: David, let me just ask, if I can, play out how databases are designed now, how they can take advantage of this unmodified, but also how things could be very, very different once they do take advantage of it. Today, if you're doing transaction processing, you're pretty much bottlenecked on a single node that maintains the fresh cache of shared data, and that cache, even if it's in memory, is associated with shared storage. What you're talking about means that because you've got memory speed access to that cache from anywhere, it is no longer tied to a node. That's what allows you to scale out to 1,000 nodes even for transaction processing. That's something we've never really been able to do. Then the fact that you have a large memory space means that you no longer optimize for mapping back and forth from disk and disk structures; you have everything in a memory native structure and you don't go through this thin straw for IO to storage, you go through memory speed IO. That's a big, big - >> David: That's the end point. I agree. That's not here quite yet. It's still IO, so the IO has been improved dramatically, the protocol within NVMe and the over fabric part of it. The elapsed time has been improved, but it's not yet the same as, for example, the HPE initiative. That's saying you change your architecture, you change your way of processing to work just in memory. Everything is assumed to be memory. We're not there yet. 200 microseconds is still a lot, lot slower than the process that - one impact of this architecture is that the amount of data that you can pass through it is enormously higher and therefore the memory sizes themselves within each node will need to be much, much bigger. There is a real opportunity for architectures which minimize the impact, which hold data coherently across multiple nodes and where there's minimal impact of, no tapping on the shoulder for every byte transferred, so you can move large amounts of data into memory and then tell people that it's there and allow it to be shared, for example, between the different cores and the GPUs and FPGAs that will be in these processors. There's more to come in terms of the architecture in the future. This is a step along the way, it's not the whole journey. >> Participant: David, another question. You just referenced 200 milliseconds or microseconds? >> David: Did I say milliseconds? I meant microseconds. >> Participant: You might have, I might have misheard. Relate that to the five microsecond thing again. >> David: If you have data directly attached to your processor, the access time is 195 microseconds. If you need to go remote, anywhere else in the thousand nodes, your access time is 200 microseconds.
In other words, the additional overhead of getting at that data remotely is five microseconds. >> Participant: That's incredible. >> David: Yes, yes. That is absolutely incredible. That's something that data scientists have been working on for years and years. Okay. That's the reason why you can now do what I talked about, which is you can have access from any node to any data within that large number of nodes. You can have petabytes of data there and you can have access from any single node to any of that data. That, in terms of data enablement, digital enablement, is absolutely amazing. In other words, you don't have to pre-place the data that's local to one application in one place. You're allowing enormous flexibility in how you design systems. That, coming back to artificial intelligence, etc., allows you a much, much larger amount of data that you can call on for improving applications. >> Participant: You can explore and train models, huge models, really quickly? >> David: Yes, yes. >> Participant: Apparently that process works better when you have an MPI-like mesh rather than a ring. >> David: If you compare this architecture to the DSSD architecture, which was the first entrant into this space and which EMC bought for a billion dollars, that one stopped at 40 nodes. Its architecture was very, very proprietary all the way through. This one takes you to 1,000 nodes with much, much lower cost. They believe that the cost will be between 10 and 20% of the equivalent DSSD system. >> Dave: Can I ask a question about, you mentioned query optimizer. Who develops the query optimizer for the system? >> David: Nobody does yet. >> Jim: The DBMS vendor would have to re-write theirs with a whole different cost model. >> Dave: So we would have an optimizer database system? >> David: Who's asking a question, I'm sorry. I don't recognize the voice. >> Dave: That was Neil. Hold on one second, David. Hold on one second. Go ahead Nick. You talk about translation. >> Nick: ... On a network. It's a SAN. It happens to be very low latency and very high throughput, but it's just a storage sub-system. >> David: Yep. Yep. It's a storage sub-system. It's called a server SAN. That's what we've been talking about for a long time: you need the same characteristics, which is that you can get at all the data, but you need to be able to get at it in compute time as opposed to taking-a-stroll-down-the-road time. >> Dave: Architecturally it's a SAN without an array controller? >> David: Exactly. Yeah, the array controller is software from a company called Xcellate, what was the name of it? I can't remember now. Say it again. >> Nick: Excelero or Exceleron? >> David: Excelero. That's the company that has produced the software for the data services, etc. >> Dave: Let's, as we sort of wind down this segment, let's talk about the business impact again. We're talking about different ways potentially to develop applications. There's an ecosystem requirement here, it sounds like, from the ISVs to support this, and other developers. It finally portends the elimination of the last electromechanical device in computing, which has implications for a lot of things: performance, value, application development, application capability. Maybe you could talk about that a little bit, again thinking in terms of how practitioners should look at this. What are the actions that they should be taking and what kinds of plans should they be making in their strategies?
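The 195 versus 200 microsecond numbers David quotes are worth putting next to the cost of actually shipping bulk data over the 100 gigabit fabric. The quick back-of-envelope below uses only those figures plus an assumed 1 GB payload, and it shows why the architecture pushes you to move the processing rather than the data.

```python
# Figures from the discussion above: 195 us for a local access, 200 us for
# a remote one (5 us of fabric overhead), and a 100 Gbit/s link. The 1 GB
# payload size is an assumption for illustration.
LOCAL_US, REMOTE_US = 195, 200
LINK_GBPS = 100

def transfer_time_us(bytes_moved, gbps=LINK_GBPS):
    """Ideal wire time to ship a payload, ignoring protocol overhead."""
    bits = bytes_moved * 8
    return bits / (gbps * 1e9) * 1e6   # microseconds

remote_penalty = REMOTE_US - LOCAL_US                 # 5 us
ship_one_gb = transfer_time_us(1 * 1024**3)           # ~86,000 us
print(remote_penalty, round(ship_one_gb))
# Reaching across the fabric costs ~5 us, while dragging 1 GB of data to
# the processing node costs tens of milliseconds even on 100 GbE, which is
# why you move the processing to the data instead.
```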
>> David: I thought Neil's comment last week was very perceptive, which is, you wouldn't start with people like me who have been imbued with the 100-database-call limits for umpteen years. You'd start with people, millennials, or sub-millennials or whatever you want to call them, who can take a completely fresh view of how you would exploit this type of architecture. Fundamentally you will be able to get through 10 or 100 times more data in real time than you can with today's systems. There are two parts to that data, as I said before. The traditional systems of record that need to be updated, and then a whole host of applications that will allow you to do processes which are either not possible or very slow today. To give one simple example, if you want to do real time changing of pricing based on the availability of your supply chain, based on what you've got in stock, based on the delivery capabilities, that's a very, very complex problem. The optimization of all these different things, and there are many others that you could include in that. This will give you the ability to automate that process and optimize that process in real time as part of the systems of record and update everything together. That, in terms of business value, is extracting a huge number of people who previously would be involved in that chain, reducing their involvement significantly, and making the company itself far more agile, far more responsive to change in the marketplace. That's just one example; you can think of hundreds for every marketplace where the application now becomes the system of record, augmented by AI, and huge amounts more data can improve the productivity of an organization and the agility of an organization in the marketplace. >> This is a godsend for AI. AI, the draw of AI is all this training data. If you could just move that at memory speed to the application in real time, it makes the applications much sharper and more (mumbling). >> David: Absolutely. >> Participant: How long, David, would it take for the cloud vendors to not just offer some instances of this, but essentially to retool their infrastructure? (laughing) >> David: This is, to me, a disruption and a half. The people who can be first to market with this are the SaaS vendors who can take their applications, or new SaaS vendors, ISVs. Sorry, say that again, sorry. >> Participant: The SaaS vendors who have their own infrastructure? >> David: Yes, but it's not going to be long before the AWSs and Microsofts put this in their tool bag. The SaaS vendors have the greatest capability of making this change in the shortest possible time. To me, that's one area where we're going to see results. Make no mistake about it, this is a big change, and at the Micron conference, I can't remember what the guy's name was, he said it takes two Olympics for people to start adopting things for real. I think that's going to be shorter than two Olympics, but it's going to be quite a slow process for pushing this out. It's radically different and a lot of the traditional ways of doing things are going to be affected. My view is that SaaS is going to be first and then there are going to be individual companies that solve the problems themselves. Large companies, even small companies, that put in systems of this sort and then use them to outperform the marketplace in a significant way. Particularly in the finance area and particularly in other data-intensive areas. That's my two pennies' worth. Anybody want to add anything else? Any other thoughts?
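David's real-time pricing example, folding stock levels, supply-chain availability, and delivery capacity into one decision made next to the systems of record, can be caricatured in a few lines; the weights and thresholds below are invented purely to show the shape of such a calculation.

```python
def dynamic_price(base_price, stock_on_hand, reorder_lead_days,
                  delivery_slots_free):
    """Illustrative only: a crude repricing rule that folds current stock,
    supply-chain lead time, and delivery capacity into one adjustment.
    The weights are made up; the point is that all three signals live in
    or next to the systems of record and can now be combined in one pass,
    in real time."""
    price = base_price
    if stock_on_hand < 10:            # scarce stock supports a premium
        price *= 1.10
    if reorder_lead_days > 14:        # slow resupply, protect margin
        price *= 1.05
    if delivery_slots_free < 5:       # constrained delivery capacity
        price *= 1.03
    return round(price, 2)

print(dynamic_price(100.0, stock_on_hand=6, reorder_lead_days=21,
                    delivery_slots_free=3))   # roughly 119 with all premiums
```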
>> Dave: Let's wrap some final thoughts on this one. >> Participant: Big deal for big data. >> David: Like it, like it. >> Participant: It's actually more than that, because there used to be a major trade off between big data and fast data, latency and throughput, and this starts to push some of those boundaries out so that you sort of can have both at once. >> Dave: Okay, good. Big deal for big data and fast data. >> David: Yeah, I like it. >> Dave: George, you want to talk about digital twins? I remember when you first sort of introduced this, I was like, "Huh? What's a digital twin? That's an interesting name." I guess, I'm not sure you coined it, but why don't you tell us what a digital twin is and why it's relevant. >> George: All right. GE coined it. I'm going to, at a high level, talk about what it is, why it's important, and a little bit about, as much as we can tell, how it's likely to start playing out and a little bit on the differences of the different vendors who are going after it. As far as sort of defining it, I'm cribbing a little bit from a report that's just in the edit process. It's a data representation, this is important, or a model of a product, process, service, customer, supplier. It's not just an industrial device. It can be any entity involved in the business. This is a refinement Peter sort of helped with. The reason it's any entity is because it can represent the structure and behavior not just of a machine tool or a jet engine, but of a business process, like the sales order process when you see it on a screen and its workflow. That's a digital twin of what used to be a physical process. It applies to both the devices and assets and the processes, because when you can model them, you can integrate them within a business process and improve that process. Going back to something that's more physical so I can do a more concrete definition, you might take a device like a robotic machine tool, and the idea is that the twin captures the structure and the behavior across its lifecycle. As it's designed, as it's built, tested, deployed, operated, and serviced. I don't know if you all know the myth where, among the Greek gods, one of the goddesses sprang fully formed from the forehead of Zeus. I forget who it was. The point of that is a digital twin is not going to spring fully formed from any developer's head. Getting to the level of fidelity I just described is a journey, and a long one. Maybe a decade or more, because it's difficult. You have to integrate a lot of data from different systems and you have to add structure and behavior for stuff that's not captured anywhere and may never be captured anywhere. Just for example, CAD data might have design information; manufacturing information might come from there or another system. CRM data might have support information. Maintenance, repair, and overhaul applications might have information on how it's serviced. Then you also connect the physical version with the digital version with essentially telemetry data that says how it's been operating over time. That sort of helps define its behavior so you can manipulate that and predict things or simulate things that you couldn't do with just the physical version. >> You have to think about, combined with, say, 3D printers, you could create a hot physical backup of some malfunctioning thing in the field, because you have the entire design, you have the entire history of its behavior and its current state before it went kablooey.
Conceivably, it can be fabricated on the fly and reconstituted as a physical object from the digital twin that was maintained. >> George: Yes, you know what, actually that raises a good point, which is that the behavior that was represented in the telemetry helps the designer simulate a better version for the next version. Just what you're saying. Then with 3D printing, you can either make a prototype or another instance. Some of the printers are getting sophisticated enough to punch out better versions or parts for better versions. That's a really good point. There's one thing that has to hold all this stuff together, which is really kind of difficult, challenging technology. IBM calls it a knowledge graph. It's pretty much in anyone's version; they might not call it a knowledge graph. A graph is, instead of a tree where you have a parent and then children and then the children have more children, a structure where many things can relate to many things. The reason I point that out is that it puts a holistic structure over all these disparate sources of data and behavior. You essentially talk to the graph, sort of like with Arnold, talk to the hand. That didn't, I got crickets. (laughing) Let me give you guys the, I put a definitions table in this doc. I had a couple things. Data models. These are some important terms. The data model represents the structure but not the behavior of the digital twin. The API represents the behavior of the digital twin, and it should conform to the data model for maximum developer usability. Jim, jump in anywhere you feel like you want to correct or refine. The object model is a combination of the data model and the API. You were going to say something? >> Jim: No, I wasn't. >> George: Okay. The object model ultimately is the digital twin. Another way of looking at it: it defines the structure and behavior. This sounds like one of these, say "T" words, the canonical model. It's a generic version of the digital twin, or really the one where you're going to have a representation that doesn't have customer specific extensions. This is important because the way these things are getting built today is mostly custom, bespoke, and so you want to be able to reuse work. If someone's building this for you, like a system integrator, you want to be able to, or they want to be able to, reuse this on the next engagement, and you want to be able to take the benefit of what they've learned on the next engagement back to you. There has to be this canonical model that doesn't break every time you essentially add new capabilities. It doesn't break your existing stuff. The knowledge graph, again, is this thing that holds together all the pieces and makes them look like one coherent whole. I'll get to, I talked briefly about network compatibility and I'll get to level of detail. Let me go back to, I'm sort of doing this from crib notes. We talked about telemetry, which is sort of combining the physical and the twin. Again, telemetry's really important because this is like the time series database. It says, this is all the stuff that was going on over time. Then you can look at telemetry data that tells you, we got a dirty power spike and after three of those, this machine sort of started vibrating. That's part of how you're looking to learn about its behavior over time. In that process, models get better and better at predicting and enabling you to optimize their behavior and the business process with which they integrate. I'll give some examples of that.
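George's definitions, the data model for structure, the API for behavior, and the object model as the combination, can be made concrete with a tiny sketch; nothing here is any vendor's actual schema, just an illustration of the terms.

```python
from dataclasses import dataclass, field

@dataclass
class MachineToolTwin:
    """Bare-bones illustration of the terms above.

    The dataclass fields are the data model (structure); the methods are
    the API (behavior); together they form the object model of the twin.
    """
    asset_id: str
    design_rev: str                                         # from CAD / design systems
    service_history: list = field(default_factory=list)     # from MRO applications
    telemetry: list = field(default_factory=list)           # time-series readings

    def ingest(self, reading: dict):
        """Attach one telemetry sample from the physical machine."""
        self.telemetry.append(reading)

    def vibration_trend(self, last_n: int = 3) -> float:
        """Tiny stand-in for 'behavior': average recent vibration."""
        recent = [r["vibration"] for r in self.telemetry[-last_n:]]
        return sum(recent) / len(recent) if recent else 0.0

twin = MachineToolTwin(asset_id="mt-042", design_rev="C")
for v in (0.2, 0.3, 0.9):                 # a dirty-power spike shows up here
    twin.ingest({"vibration": v})
print(twin.vibration_trend())             # ~0.47 and drifting upward
```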
Twins, these digital twins, can themselves be composed at levels of detail. I think I used the example of a robotic machine tool. Then you might have a bunch of machine tools on an assembly line, and then you might have a bunch of assembly lines in a factory. As you start modeling not just the single instance but the collections at higher and higher levels of abstraction, or levels of detail, you get a richer and richer way to model the behavior of your business. More and more of your business. Again, it's not just the assets, but it's some of the processes. Let me now talk a little bit about how the continual improvement works. As Jim was talking about, we have data feedback loops in our machine learning models. Once you have a good quality digital twin in place, you get the benefit of increasing returns from the data feedback loops. In other words, if you can get to a better starting point than your competitor and then you get on the increasing returns of the data feedback loops, that is improving the fidelity of the digital twins faster than your competitor. That's for one twin; I'll talk about how you want to make the whole ecosystem of twins sort of self-reinforcing. I'll get to that in a sec. There's another point to make about these data feedback loops, which is traditional apps, and this came up with Jim and Neil, traditional apps are static. You want upgrades, you get stuff from the vendor. With digital twins, they're always learning from the customer's data, and that has implications when the partner or vendor who helped build it for a customer takes learnings from the customer and goes to a similar customer for another engagement. I'll talk about the implications of that. This is important because it's half packaged application and half bespoke. The point is that you don't have to take the customer's data, but your model learns from the data. Think of it as, I'm not going to take your coffee beans, your data, but I'm going to make coffee from your beans and I'm going to take that to the next engagement with another customer who could be your competitor. In other words, you're extracting all the value from the data, and that helps modify the behavior of the model, and the next guy gets the benefit of it. Dave, this is the stuff where IBM keeps saying, we don't take your data. You're right, but you're taking the juice you squeezed out of it. That's one of my next reports. >> Dave: It's interesting, George. Their contention is, they uniquely, unlike Amazon and Google, don't swap spit, your spit, with their competitors. >> George: That's misleading. To say Amazon and Google, those guys aren't building digital twins. Parametric Technology is. I've got this definitively from a Parametric technical fellow at an AWS event last week, which is, not only do they not use the data, they don't use the structure of the twin either from engagement to engagement. That's a big difference from IBM. I have a quote from Chris O'Connor of IBM in Munich saying, "We'll take the data model, but we won't take the data." I'm like, so you take the coffee from the beans even if you don't take the beans? I'm going to be very specific about saying that claiming you don't do what Google and Facebook do, while doing what they do, is misleading. >> Dave: My only caution there is do some more vetting and checking. A lot of times what some guy says on a Cube interview, he or she doesn't even know, in my experience. Make sure you validate that. >> George: I'll send it to them for feedback, but it wasn't just him.
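George's "levels of detail" idea, machine-tool twins composing into an assembly-line twin and then a factory twin, is essentially hierarchical composition. A toy sketch, with a made-up health metric standing in for real behavior, shows how state can roll up the hierarchy.

```python
class TwinGroup:
    """Sketch of composing twins at levels of detail: a line twin is built
    from machine-tool twins, and a factory twin from line twins. The
    health score is a placeholder metric, only to show the rollup."""

    def __init__(self, name, children):
        self.name = name
        self.children = children          # twins or nested TwinGroups

    def health(self):
        scores = [child.health() for child in self.children]
        return sum(scores) / len(scores)

class ToolTwin:
    def __init__(self, name, health_score):
        self.name, self._health = name, health_score

    def health(self):
        return self._health

line_1 = TwinGroup("line-1", [ToolTwin("mt-1", 0.96), ToolTwin("mt-2", 0.60)])
line_2 = TwinGroup("line-2", [ToolTwin("mt-3", 0.90)])
factory = TwinGroup("plant-a", [line_1, line_2])
print(round(factory.health(), 2))   # 0.84 -- line-1's weak tool drags it down
```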
I got it from the CTO of the IOT division as well. >> Dave: When you were in Munich? >> George: This wasn't on the Cube either. This was by the side of, at the coffee table during our break. >> Dave: I understand, and CTOs in theory should know. I can't tell you how many times I've gotten a definitive answer from a pretty senior level person and it turns out it was, either they weren't listening to me or they didn't know or they were just yessing me or whatever. Just be really careful and make sure you do your background checks. >> George: I will. I think the key is to leave them room to provide a nuanced answer. It's more about being really, really, really concrete about really specific edge conditions and saying, do you or don't you. >> Dave: This is a pretty big one. If I'm a CIO, a chief digital officer, a chief data officer, COO, head of IT, head of data science, what should I be doing in this regard? What's the advice? >> George: Okay, can I go through a few more or are we out of time? >> Dave: No, we have time. >> George: Let me do a couple more points. I talked about training a single twin, or an instance of a twin, and I talked about the acceleration of the learning curve. There's edge analytics. David has educated us with the help of looking at GE Predix. David, you have been talking about this for a long time. You want edge analytics to inform or automate a low latency decision, and so this is where you're going to have to run some amount of analytics, right near the device. Although I've got to mention, hopefully this will elicit a chuckle, when you get some vendors telling you what their edge and cloud strategies are. MapR said, we'll have a Hadoop cluster that only needs four or five nodes as our edge device. And we'll need five admins to care for and feed it. He didn't say the last part, but that obviously isn't going to work. The edge analytics could be things like recalibrating the machine for a different tolerance if it's seeing that it's getting out of the tolerance window, or something like that. The cloud, and this is old news for anyone who's been around David, but you're going to have a lot of data, not all of it, going back to the cloud to train both the instances of each robotic machine tool and the master of that machine tool. The reason is, an instance would be, oh, I'm operating in a high humidity environment, something like that. Another one would be operating where there's a lot of sand or something that screws up the behavior. Then the master might be something that has behavior that's sort of common to all of them. The training will take place on the instances and the master, and in all likelihood you'll push down versions of each. Next to the physical device, process, whatever, you'll have the instance one and a class one, and between the two of them, they should give you the optimal view of behavior and the ability to simulate to improve things. It's worth mentioning, again as David found out, not by talking to GE but by accidentally looking at their documentation, their whole positioning of edge versus cloud is a little bit hand waving, and in talking to the guys from ThingWorx, which is a division of what used to be called Parametric Technology, which is now just PTC, it appears that they're negotiating with GE to give them the orchestration and distributed database technology that GE can't build itself.
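The split George describes, a low-latency rule running right next to the device, telemetry flowing back to the cloud where instance and master models are retrained, and versions of each pushed back down, can be sketched roughly as below; the tolerance window, confidence threshold, and model names are all assumptions for illustration.

```python
def edge_decision(reading, tolerance=(0.45, 0.55)):
    """Low-latency edge rule: recalibrate locally when the reading drifts
    out of the tolerance window, and always queue the sample for the cloud
    where the instance and master models are retrained. Thresholds and
    action names are illustrative."""
    low, high = tolerance
    actions = ["queue_for_cloud_training"]          # always feed the loop
    if not (low <= reading <= high):
        actions.insert(0, "recalibrate_locally")    # act at the edge, now
    return actions

def pick_model(instance_model, master_model, instance_confidence):
    """If the per-device instance model (say, one trained for a high
    humidity site) is confident, prefer it; otherwise fall back to the
    master model that captures behavior common to the whole fleet."""
    return instance_model if instance_confidence >= 0.7 else master_model

print(edge_decision(0.61))   # ['recalibrate_locally', 'queue_for_cloud_training']
print(pick_model("instance-v12", "master-v4", instance_confidence=0.55))  # master-v4
```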
I've heard also from two ISVs, one major and one minor, who are both in the IOT ecosystem, one of them part of the GE ecosystem, that Predix is a mess. It's analysis paralysis. It's not that they don't have talent, it's just that they're not getting shit done. Anyway, the key thing now is when you get all this - >> David: Just from what I learned when I went to the GE event recently, they're aware of that requirement. They've actually already got some subparts of Predix which they can put in the cloud, but there needs to be more of it and they're aware of that. >> George: As usual, just another reason I need a red phone hotline to David for any and all questions I have. >> David: Flattery will get you everywhere. >> George: All right. One of the key takeaways, not the action item, but the takeaway for a customer is, when you get these data feedback loops reinforcing each other, the instances of, say, the robotic machine tools to the master, then the instance to the assembly line to the factory, when all that is being orchestrated and all the data is continually enhancing the models, as well as the manual process of adding contextual information or new levels of structure, this is when you're on an increasing-returns sort of curve that really contributes to sustaining competitive advantage. Remember, think of how when Google started off on search, it wasn't just their algorithm, but it was collecting data about which links you picked, in which order and how long you were there, that helped them reinforce the search rankings. They got so far ahead of everyone else that even if others had those algorithms, they didn't have that data to help refine the rankings. You get this same process going when you essentially have your ecosystem of learning models across the enterprise sort of all orchestrating. This sounds like motherhood and apple pie, and there are going to be a lot of challenges to getting there, and I haven't gotten all the warts from having gone through and talked to a lot of customers who've gotten the arrows in the back, but that's the theoretical, really cool end point or position where the entire company becomes a learning organization from these feedback loops. I want to, now that we're in the edit process on the overall digital twin, I do want to do a follow up on IBM's approach. Hopefully we can do it both as a report and then as a version that's for SiliconANGLE, because that thing I wrote on Cloudera got the immediate attention of Cloudera and Amazon, and hopefully we can both provide client proprietary value add but also the public impact stuff. That's my high level. >> This is fascinating. If you're the Chief of Data Science, for example, in a large industrial company, having the ability to compile digital twins of all your edge devices can be extraordinarily valuable, because then you can use that data to do more fine-grained segmentation of the different types of edges based on their behavior and their state under various scenarios. Basically, your team of data scientists can then begin to identify the extent to which they need to write different machine learning models that are tuned to the specific requirements or status or behavior of different end points. What I'm getting at is, ultimately, you're going to have 10 zillion different categories of edge devices performing in various scenarios. They're going to be driven by an equal variety of machine learning, deep learning, AI, and all that.
All that has to be built up by your data science team in some coherent architecture, where there might be a common canonical template that all the algorithms and so forth on those devices are being built from. Each of those algorithms will then be tweaked to the specific digital twin profile of each device, is what I'm getting at. >> George: That's a great point that I didn't bring up, which is, folks who remember object oriented programming, not that I was ever able to write a single line of code, but the idea is, go into this robotic machine tool, you can inherit a couple of essentially component objects that can also be used in slightly different models. Let's say in this machine tool there's a model for a spinning device, I forget what it's called. Like a drive shaft. That drive shaft can be in other things as well. Eventually you can compose these twins, even instances of a twin, out of essentially component models themselves. ThingWorx does this. I don't know if GE does this. I don't think IBM does. The interesting thing about IBM is, their go to market really influences their approach to this, which is, they have this huge industry solutions group and then obviously the global business services group. These guys are all custom development and domain experts, so they'll go in, they're literally working with Airbus with the goal of building a model of a particular airliner. Right now I think they're doing the de-icing subsystem, I don't even remember on which model. In other words, they're helping to create this bespoke thing, and so that's what actually gets them into trouble with potentially channel conflict, or maybe it's more competitor conflict, because Airbus is not going to be happy if they take their learnings and go work with Boeing next. Whereas with PTC and ThingWorx, at least their professional services arm, they treat this much more like the implementation of a packaged software product and all the learnings stay with the customer. >> Very good. >> Dave: I got a question, George. In terms of the industrial design and engineering aspect of building products, you mentioned PTC, which has been in the CAD business and the engineering software business for 50 years, and ANSYS and folks like that who do the simulation of industrial products or any kind of a product that gets built. Is there a natural starting point for digital twin coming out of that area? The vice president of engineering would be the guy that would be a key target for this kind of thinking. >> George: Great point. I think PTC is closely aligned with Terradata, and their attitude is, hey, if it's not captured in the CAD tool, then you're just hand waving, because you won't have a high fidelity twin. >> Dave: Yeah, it's a logical starting point for any mechanical kind of device. What's a thing built to do and what's it built like? >> George: Yeah, but if it's something that was designed in a CAD tool, yes, but if it's something that was not, then you start having to build it up in a different way. I think, I'm trying to remember, but IBM did not look like they had something that was definitely oriented around CAD. Theirs looked like it was more where the knowledge graph was the core glue that pulled all the structure and behavior together. Again, that was a reflection of their product line, which doesn't have a CAD tool, and the fact that they're doing these really, really, really bespoke twins.
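George's object-oriented point, that a component model like a drive shaft can be composed into many different twins rather than rebuilt each time, looks roughly like the sketch below; the wear formula and RPM figures are stand-ins, not real engineering.

```python
class DriveShaftModel:
    """Reusable component model: the same component twin can be composed
    into many different machine twins. The wear formula is a placeholder,
    not real physics."""

    def wear(self, hours, rpm):
        return hours * rpm * 1e-7

class RoboticMachineToolTwin:
    def __init__(self):
        self.shaft = DriveShaftModel()     # composition, not copy-paste

    def shaft_wear(self, hours, rpm=3000):
        return self.shaft.wear(hours, rpm)

class ConveyorTwin:
    def __init__(self):
        self.shaft = DriveShaftModel()     # same component, different twin

    def shaft_wear(self, hours, rpm=800):
        return self.shaft.wear(hours, rpm)

print(RoboticMachineToolTwin().shaft_wear(hours=500))   # ~0.15
print(ConveyorTwin().shaft_wear(hours=500))             # ~0.04
```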
>> Dave: I'm thinking, it strikes me that from the industrial design and engineering area, it's really the individual product that's the focus. That's one part of the map. The dynamic you're pointing at, there's lots of other elements of the map in terms of an operational, a business process. That might be the fleet of wind turbines or the fleet of trucks, how they behave collectively. There's lots of different entry points. I'm just trying to grapple with, doesn't the CAD area, the engineering area, at least for hard products, offer an obvious starting point for users to begin to look at this? The VP of Engineering needs to be on top of this stuff. >> George: That's a great point that I didn't bring up, which is, a guy at Microsoft who was their CTO in their IT organization gave me an example, which was, you have a pipeline that's 1,000 miles long. It's got 10,000 valves in it, but you're not capturing the CAD design of the valve, you just put in a really simple model that measures pressure, temperature, and leakage or something. You string 10,000 of those together into an overall model of the pipeline. That is a low fidelity thing, but that's all they need to start with. Then they can see, when they're doing maintenance or when the flow through is higher, what the impact is on each of the different valves or flanges or whatever. It doesn't always have to start with super high fidelity. It depends on what you're optimizing for. >> Dave: It's funny. I had a conversation years ago with a guy at the engineering firm MacNeal-Schwendler, if you remember those folks. He was telling us that about 30 to 40 years ago when they were doing computational fluid dynamics, they were doing one dimensional computational fluid dynamics, if you can imagine that. Then they were able, because of the compute power or whatever, to get to two dimensional computational fluid dynamics, and finally they got to three dimensional, and they're looking also at four and five dimensional as well. It's serviceable, I guess, is what I'm saying: in that pipeline example, the way that they build that thing or the way that they manage that pipeline, the one dimensional model of a valve is good enough, but over time, maybe a two or three dimensional one is going to be better. >> George: That's why I say that this is a journey that's got to take a decade or more. >> Dave: Yeah, definitely. >> Take the example of an airplane. The old joke is it's six million parts flying in close formation. It's going to be a while before you fit that in one model. >> Dave: Got it. Yes. Right on. When you have that model, that's pretty cool. All right guys, we're about out of time. I need a little time to prep for my next meeting which is in 15 minutes, but final thoughts. Do you guys feel like this was useful in terms of guiding things that you might be able to write about? >> George: Hugely. This is hugely more valuable than anything we've done as a team. >> Jim: This is great, I learned a lot. >> Dave: Good. Thanks, you guys. This has been recorded. It's up on the cloud and I'll figure out how to get it to Peter and we'll go from there. Thanks everybody. (closing thank you's)