HPE Compute Engineered for your Hybrid World - Accelerate VDI at the Edge
>> Hello everyone. Welcome to theCUBE's coverage of Compute Engineered for your Hybrid World, sponsored by HPE and Intel. Today we're going to dive into advanced performance of VDI with the 4th Gen Intel Xeon Scalable processors. Hello, I'm John Furrier, the host of theCUBE. My guests today are Alan Chu, Director of Data Center Performance and Competition for Intel, as well as Denis Kondakov, who's the VDI product manager at HPE, and also joining us is Cynthia Sustiva, CAD/CAM product manager at HPE. Thanks for coming on, really appreciate you guys taking the time.
>> Thank you.
>> So accelerating VDI to the Edge. That's the topic here today. Let's get into it. Dennis, tell us about the new HPE ProLiant DL320 Gen 11 server.
>> Okay, absolutely. Hello everybody. So the HPE ProLiant DL320 Gen 11 server is the new edge-centric, cost and density optimized compact form factor server. It enables customers to modernize and power the next generation of workloads in diverse rack environments at the Edge, in an industry-standard design with flexible scale for advanced graphics and compute. So it is a one-unit, one-processor, rack-optimized server that can be deployed in the enterprise data center as well as at the remote office and at the Edge.
>> Cynthia, HPE has announced another server, the ProLiant ML350. What can you tell us about that?
>> Yeah, so the HPE ProLiant ML350 Gen 11 server is a powerful tower solution for a wide range of workloads. It is ideal for remote office compute with next-gen performance and expandability, with two processors in a tower form factor. This enables the server to be used not only in the data center environment, but also in the open office space as a powerful workstation use case.
>> Dennis mentioned both servers are empowered by the 4th Gen Intel Xeon Scalable processors. Can you talk about the relationship between Intel and HPE to get this done? How do you guys come together, what's behind the scenes? Share as much as you can.
>> Yeah, thanks a lot, John. So without a doubt it takes a lot to put all this together, and I think the partnership that HPE and Intel bring together is a critical point for us to be able to deliver to our customers. And I'm really thrilled to say that these leading edge solutions that Dennis and Cynthia just talked about are built on the foundation of our 4th Gen Xeon Scalable platform that's trying to meet a wide variety of deployments for today and into the future. So I think the key point of it is we're together trying to drive leading performance with built-in acceleration, and in order to deliver a lot of the business value to our customers, both HPE and Intel look to scale, drive down costs and deliver new services.
>> You've got the 4th Gen Xeon, you've got Gen 11 and multiple ProLiants, a lot of action going on. Again, I love when these next gens come out. Can each of you guys comment and share what are the use cases for each of the systems? Because I think what we're looking at here is the next level innovation. What are some of the use cases on the systems?
>> Yeah, so for the ML350, in the modern world where more and more data are generated at the Edge, we need to deploy compute infrastructure where the data is generated. So smaller form factor servers will satisfy the requirements of SMB customers or remote and branch offices to deliver the required performance and redundancy where they're needed.
These types of locations can lack dedicated facilities with strict humidity, temperature and noise isolation controls. The ML350 Gen 11 server can be used as a powerful workstation sitting under a desk in the office or open space, as well as a server for virtualized workloads. It is a productivity workhorse with the ability to scale and adapt to any environment. One of the use cases can be hosting a digital workplace for manufacturing CAD/CAM engineering or oil and gas industry customers. So this server can be used as a high-end bare metal workstation for local end users, or it can host virtualized desktop solution environments for local and remote users. And to talk about the DL320 Gen 11, I will pass it on to Dennis.
>> Okay.
>> Sure. So when we are talking about edge locations, we are talking about very specific requirements. We need to provide solution building blocks that are power and performance efficient, secure, available, able to scale up and down in smaller increments compared to the enterprise data center, and of course redundant. The DL320 Gen 11 server is the perfect server to satisfy all of those requirements. So for example, SMB customers can build a VDI solution starting with just two HPE ProLiant DL320 Gen 11 servers that will provide sufficient performance for a high density VDI solution and at the same time be redundant and enabled for scaling up as required. So for VDI use cases it can be used for high density general VDI without GPU acceleration, or for high performance VDI with virtual GPUs (vGPU). Thanks to the modern modular architecture used in the server, it can be tailored for GPU or high density storage deployments in a software defined compute and storage environment. And to provide greater detail on the Intel view, I'm going to pass it to Alan.
>> Thanks a lot, Dennis, and I loved how you're both seeing the importance of how we scale and the applicability of the use cases of both the ML350 and DL320 solutions. So scalability is certainly a key tenet of how we're delivering Intel's Xeon Scalable platform. It is called Xeon Scalable after all. And we know that deployments are happening in all different sorts of environments. And I think, Cynthia, you talked a little bit about the environmental factors that go into how we're designing. I think a lot of people think of a traditional data center with all the bells and whistles and cooling technology, whereas at the Edge it sometimes might just be a dusty closet. So we're designing 4th Gen Xeon Scalable to tackle all those different environments and keep that in mind. Our SKUs range from low to high power, general purpose to segment optimized. We're supporting long life use cases, so that all goes into account in delivering value to our customers. A lot of the latency sensitive nature of these Edge deployments also benefits greatly from monolithic architectures, and with our latest CPUs we do maintain quite a bit of that with many of our SKUs, delivering higher frequencies along with SKUs optimized for specific workloads like networking. So in the end we're looking to drive scalability. We're looking to drive value in a lot of our end users' most important KPIs, whether it's latency, throughput or efficiency, and 4th Gen Xeon Scalable is looking to deliver that with up to 60 cores, the most built-in accelerators of any CPU in the market, and really the true technology transitions of the platform with DDR5, PCIe Gen 5 and CXL.
>> Love the scalability story, love the performance. We're going to take a break. Thanks Cynthia, Dennis. Now we're going to come back on our next segment after a quick break to discuss the performance and the benefits of the 4th Gen Intel Xeon Scalable. You're watching theCUBE, the leader in high tech coverage, be right back.
Welcome back around. We're continuing theCUBE's coverage of Compute Engineered for your Hybrid World. I'm John Furrier, I'm joined by Alan Chu from Intel and Denis Konikoff and Cynthia Sistia from HPE. Welcome back. Cynthia, let's start with you. Can you tell us the benefits of the 4th Gen Intel Xeon Scalable processors for the HPE Gen 11 servers?
>> Yeah, so HPE ProLiant Gen 11 servers support DDR5 memory, which delivers increased bandwidth and lower power consumption. There are 32 DDR5 DIMM slots with up to eight terabytes total on the ML350 and 16 DDR5 DIMM slots with up to two terabytes total on the DL320. So we deliver more memory at a greater bandwidth. Also, PCIe 5.0 delivers increased bandwidth and a greater number of lanes. And when we say an increased number of lanes, we need to remember that each lane delivers more bandwidth than a lane of the previous generation. Plus, a flexible storage configuration on the HPE DL320 Gen 11 makes it an ideal server for establishing a software defined compute and storage solution at the Edge. When we consider a server for VDI workloads, we need to keep the right balance between the number of cores and CPU frequency in order to deliver the desired density and an uncompromised user experience. So the new server generation supports a greater number of single-wide and double-wide GPUs to deliver more graphics accelerated virtual desktops per server unit than ever before. The HPE ProLiant ML350 Gen 11 server supports up to four double-wide GPUs or up to eight single-wide GPUs. When designing GPU accelerated solutions, the number of GPUs available in the system, and consequently the number of vGPUs that can be provisioned for VMs, is the binding factor rather than CPU cores or memory. So HPE ProLiant Gen 11 servers with Intel 4th Gen Xeon Scalable processors enable us to deliver more virtual desktops per server than ever before. And with that I will pass it on to Alan to provide more details on the new gen CPU performance.
>> Thanks, Cynthia. So you brought up, I think, a really great point earlier about the importance of achieving the right balance. So between the both of us, Intel and HPE, I'm sure we've heard countless feedback about how we should be optimizing efficiency for our customers, and with 4th Gen Xeon Scalable in HPE ProLiant Gen 11 servers I think we achieved just that with our built-in accelerators. So built-in acceleration delivers not only revolutionary performance, but enables significant offload from valuable core execution. That offload unlocks a lot of previously unrealized execution efficiency. So for example, with QuickAssist Technology built in, running NGINX TLS encryption to drive 65,000 connections per second, we can offload and free up to 47% of the cores to do other work. Accelerating AI inference with AMX, that's 10X higher performance, and we're now unlocking real-time inferencing. It's becoming an element in every workload from the data center to the Edge. And lastly, with faster and more efficient database performance with RocksDB, executing with the Intel In-Memory Analytics Accelerator, we're able to deliver 2X the performance per watt versus the prior gen.
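To make Cynthia's sizing point above concrete: when desktops are GPU accelerated, the number of vGPU slices the installed GPUs can supply usually caps density before CPU cores or memory do. The back-of-the-envelope sketch below only illustrates that logic; every number in it (GPU count, frame buffer, vGPU profile, per-desktop CPU and memory budgets) is an assumption for illustration, not HPE or Intel sizing guidance.

```java
// Hedged sizing sketch: GPU frame buffer as the binding factor for GPU-accelerated VDI.
// All figures below are illustrative assumptions, not vendor sizing guidance.
public class VdiSizingSketch {
    public static void main(String[] args) {
        int gpusPerServer = 4;          // e.g. four double-wide GPUs in an ML350-class tower
        int frameBufferGbPerGpu = 48;   // assumed GPU memory per card
        int vGpuProfileGb = 4;          // assumed frame buffer per virtual desktop (vGPU profile)

        int physicalCores = 2 * 32;     // assumed two 32-core CPUs
        double coresPerDesktop = 0.5;   // assumed CPU oversubscription per desktop
        int memoryGb = 1024;            // assumed installed DDR5
        int memoryGbPerDesktop = 8;

        int byGpu    = gpusPerServer * (frameBufferGbPerGpu / vGpuProfileGb);
        int byCpu    = (int) (physicalCores / coresPerDesktop);
        int byMemory = memoryGb / memoryGbPerDesktop;

        // The smallest of the three limits caps density; with these assumptions
        // it is the GPU, matching the "binding factor" point above.
        int desktops = Math.min(byGpu, Math.min(byCpu, byMemory));
        System.out.printf("GPU limit=%d, CPU limit=%d, memory limit=%d -> %d desktops%n",
                byGpu, byCpu, byMemory, desktops);
    }
}
```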
>> So I'll say it's that kind of offload that is really going to enable more and more virtualized desktops or users for any given deployment.
>> Thanks everyone. We still got a lot more to discuss with Cynthia, Dennis and Alan, but we're going to take a break. Quick break before wrapping things up. You're watching theCUBE, the leader in tech coverage. We'll be right back.
Okay, welcome back everyone to theCUBE's coverage of Compute Engineered for your Hybrid World. I'm John Furrier. We'll be wrapping up our discussion on advanced performance of VDI with the 4th Gen Intel Xeon Scalable processors. Welcome back everyone. Dennis, we'll start with you. Let's continue our conversation and turn our attention to security. Obviously security is baked in from day zero, as they say. What are some of the new security features or the key security features for the HPE ProLiant Gen 11 servers?
>> Sure. I would like to start with the balance, right? We were talking about performance, we were talking about density, but Alan mentioned the balance. So what about security? Security is a really important aspect, especially if we're talking about solutions deployed at the Edge. When security is not addressed, all other aspects of the environment become unimportant. And HPE is uniquely positioned to deliver the best-in-class security solution on the market, starting with the trusted supply chain and factories and the silicon root of trust implemented from the factory. So the new iLO 6 supports added protection leveraging SPDM for component authorization, and it is not only enabled for the embedded server management, but also integrated with HPE GreenLake Compute Ops Management, which enables an environment for secure and optimized configuration, deployment and even lifecycle management, starting from a single server deployed at the Edge and all the way up to the full scale distributed data center. So it brings an uncompromised and trusted solution to customers, fully protected at all tiers: hardware, firmware, hypervisor, operating system, application and data. And the new Intel CPUs play an important role in securing the platform. So, Alan-
>> Yeah, thanks. So Intel, I think our zero trust strategy toward security is a really strong parallel to all the focus that HPE is also bringing to that segment and market. We have even invested in a lot of hardware enabled security technologies like SGX, designed to enhance data protection at rest, in motion and in use. SGX's application isolation is the most deployed, researched and battle tested confidential computing technology for the data center market, with the smallest trust boundary of any solution in market. So as we've talked a little bit about virtualized use cases, a lot of virtualized applications also rely on encryption, whether bulk or specific ciphers. And this is again an area where we've seen the opportunity for offload to Intel's QuickAssist Technology to encrypt within a single data flow. I think Intel and HPE together, we are really providing security at all facets of execution today.
>> I love that Software Guard Extensions, SGX, also silicon root of trust. We've heard a lot about great stuff. Congratulations, security's very critical as we see more and more. Got to be embedded, got to be completely zero trust. Final question for you guys. Can you share any messages you'd like to share with the audience, each of you? What should they walk away from this with? What's in it for them? What does all this mean?
>> Yeah, so I'll start.
Yes, so to wrap it up, HPE ProLiant Gen 11 servers are built on 4th Generation Xeon Scalable processors to enable high density and extreme performance with high performance DDR5 memory and PCIe 5.0. Plus, HPE engineered and validated workload solutions provide better ROI in any consumption model preferred by a customer, from Edge to Cloud.
>> Dennis?
>> Yeah, so you are talking about all of the great features that the new generation servers are bringing to our customers, but at the same time, customer IT organizations should be ready to enable, configure, support, and fine tune all of these great features for the new server generation. And this is not an obvious task. It requires investments, skills, knowledge and experience. And HPE is ready to step up and help customers at any desired scale with the HPE GreenLake hybrid cloud platform, which enables customers with a cloud-like experience, convenience and flexibility, with the security of infrastructure deployed in the private data center or at the Edge. So while consuming HPE solutions, customers have the flexibility to choose the right level of service delivered from HPE GreenLake, starting from hardware as a service and scaling up or down as required, to consuming the full stack of hardware and software as a service with an option for pay per use.
>> Awesome. Alan, final word.
>> Yeah. What should we walk away with?
>> Yeah, thanks. So I'd say that we've talked a lot about the systems here in question with HPE ProLiant Gen 11, and they're delivering on a lot of the business outcomes that our customers require in order to optimize for operational efficiency, or maybe just to enable what they want to do with their customers, enabling new features, enabling new capabilities. Underpinning all of that is our 4th Gen Xeon Scalable platform. Whether it's the technology transitions that we're driving with DDR5 and PCIe Gen 5, or the raw performance, efficiency and scalability of the platform in CPU, I think we're here for our customers in delivering to it.
>> That's great stuff. Alan, Dennis, Cynthia, thank you so much for taking the time to do a deep dive into the advanced performance of VDI with the 4th Gen Intel Xeon Scalable processors. And congratulations on the Gen 11 ProLiant. You've got some great servers there, and again, next gen's here. Thanks for taking the time.
>> Thank you so much for having us here.
>> Okay, this is theCUBE's coverage of Compute Engineered for your Hybrid World, sponsored by HPE and Intel. I'm John Furrier for theCUBE. Accelerate VDI at the Edge. Thanks for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Denis Kondakov | PERSON | 0.99+ |
Cynthia | PERSON | 0.99+ |
Dennis | PERSON | 0.99+ |
Denis Konikoff | PERSON | 0.99+ |
Alan Chu | PERSON | 0.99+ |
Cynthia Sustiva | PERSON | 0.99+ |
Alan | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Cynthia Sistia | PERSON | 0.99+ |
John | PERSON | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
2X | QUANTITY | 0.99+ |
HP | ORGANIZATION | 0.99+ |
10X | QUANTITY | 0.99+ |
60 cores | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
one unit | QUANTITY | 0.99+ |
each lane | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
ProLiant Gen 11 | COMMERCIAL_ITEM | 0.99+ |
each | QUANTITY | 0.99+ |
ML350 | COMMERCIAL_ITEM | 0.99+ |
S&B | ORGANIZATION | 0.99+ |
DL320 Gen 11 | COMMERCIAL_ITEM | 0.98+ |
HPDO 320 Gen 11 | COMMERCIAL_ITEM | 0.98+ |
ML350 Gen 11 | COMMERCIAL_ITEM | 0.98+ |
today | DATE | 0.98+ |
ProLiant ML350 | COMMERCIAL_ITEM | 0.97+ |
two | QUANTITY | 0.97+ |
ProLiant Gen 11 | COMMERCIAL_ITEM | 0.97+ |
DL 320 Gen 11 | COMMERCIAL_ITEM | 0.97+ |
ProLiant DL320 Gen 11 | COMMERCIAL_ITEM | 0.97+ |
single | QUANTITY | 0.97+ |
ProLiant ML350 Gen 11 | COMMERCIAL_ITEM | 0.96+ |
Intels | ORGANIZATION | 0.96+ |
DL320 | COMMERCIAL_ITEM | 0.96+ |
ProLiant DL321 Gen 11 | COMMERCIAL_ITEM | 0.96+ |
ProLiant TL320 Gen 11 | COMMERCIAL_ITEM | 0.96+ |
two processors | QUANTITY | 0.96+ |
Zion | COMMERCIAL_ITEM | 0.95+ |
HPE ProLiant ML 350 Gen 11 | COMMERCIAL_ITEM | 0.95+ |
Zion | TITLE | 0.94+ |
Dheeraj Pandey, Nutanix | Nutanix NEXT Nice 2017
>> Narrator: Live from Nice, France, it's theCUBE, covering .NEXT Conference 2017 Europe. Brought to you by Nutanix. (techno music)
>> Welcome back, I'm Stu Miniman and this is SiliconANGLE Media's production of theCUBE. Happy to welcome back to the program the CEO and Founder of Nutanix, Dheeraj Pandey. The keynote this morning talked about how Nutanix is really going from a traditional enterprise infrastructure company toward its goal of being an iconic software company. So, Dheeraj, bring us up to speed as to, you know, how Nutanix positioned itself for this future.
>> Yeah, I think it's been a rite of passage, because you can't start from AWS on day one. You have to sell books and sell eCommerce, you know, being in the eCommerce space. It was a 20-year journey for them before they could get into computing and people took them seriously. I mean, look at Apple with the iPod, and then the iPhone, and the iPad, and then iTunes and the App Store. All that stuff was a journey of 15 years, you know, before they could really see that they've arrived. I think for us, we had to build the form factor of an iPhone 4 so that people realized what this hyperconvergence thing was, before we could go and ship an android as an operating system. 'Cause had the android operating system come first... just like the Windows Mobile operating system was around for a while and nobody really understood how to go and make money on it. I think we had to build a form factor first. And now that people grok it, now we can really go and make software out of this, and sell software, and make the android version of the iOS itself. And that's the thing. I think, as a company, we're challenged to balance these paradoxes. Oh, I thought you were an appliance company and you believe in this Apple-like finesse, polish and attention to detail. How do you apply that to an android-like model where you leave it to others to go and build handsets, and so on? I think that's the challenge that we've taken upon ourselves. Now inside, with the cloud service, we have a lot of control. With appliances, we have somewhat of control because we at least know what our hardware is running on. But with software, we open it up. And opening it up, and yet not giving up on the attention to detail, is the challenge that this company has to really go and undertake. We are looking at a lot of the tools we built for certifications and, you know, passing the test, the litmus test for hardware, and we're trying to figure out how to automate the heck out of it, make them into cloud services, so that customers can now go and crowdsource certifications. So there'll be some new paradigms that will emerge, and the reason why we are well placed for those kinds of things is because our heritage is appliances. So now, when we think of doing software, a lot of the tooling, a lot of the automation, certifications, the attention to detail we had, we'll need to go and make them into cloud services. We have some of them: Sizer is a cloud service, X-Ray is a cloud service, Foundation is a cloud service. So a lot of these services will then go and make the job of certifying an unknown piece of hardware easier, actually. In fact, even day two and beyond, we have what we call NCC, which is a service that runs from within Prism to do health checks. And every two hours you can do health checks. So if there's a new piece of hardware that we thought we just certified, we need to keep paranoid about it.
Stay paranoid about it and say, look, is the hardware really the hardware we wanted it to be? So there's lots of really innovative things we can do as a company that really had the heritage of appliances to go and do software as well.
>> Yeah, absolutely, people have always underestimated the interoperability required. Remember when server virtualization rolled out, the BIOS, you know, could make everything go horribly. Even, you know, containers could give you portability and run everywhere. Oh wait, networking and storage, there's considerations there. Do you think it's getting to a point, from a maturation of the market, that the software... You know, can you in the future take Nutanix to be a fully software company, where you kind of let somebody else take care of the hardware pieces and then you just become their software, and then there's software services? Does that seem like a likely future?
>> Yeah, I think with the right tools, right level of automation, right level of machine learning, right level of talk-back. You know, when I say talk-back, I mean the fact that the heartbeats are coming to us, we understand what the customers are doing. And with the right level of paranoia day two and beyond. Which is NCC, for example; we call it Nutanix Cluster Check. And it does like 350 odd health checks on a periodic basis, and it eases the load, and some things like that. With the right level of paranoia I think we can really go and make this work. And by the way, that's where design comes in. Like, how do you think of X-Ray as a service, and Foundation, and Sizer and NCC and so on? I think that's where the real design of a software company that is also not being callous about hardware comes in, actually. So I'm really looking forward to it. I think it's not just about tech and products. It's also about go-to-market, because go-to-market has to change too. I mean, the kind of packaging, the kind of pricing, the kind of ELAs, sales compensation, channel programs, a lot of those things have to be revisited as well. As with upstream engineering, you talk about, there's a lot of downstream go-to-market engineering as well that needs to be done.
>> Now, when it comes to go-to-market, partnerships are key of course. There's the channel. You want to grow your sales channel and grow a piece. But also from a technology standpoint, there's a comment I heard you make earlier this week. You know, Google has the opportunity to be kind of that next partner, as like Dell was a partner to give you pre-IPO credibility, Dell trusted you. Dell, you have Lenovo, you have IBM up on stage there. As a software company, who are the partners that help Nutanix kind of through this next phase?
>> I think you mentioned some of them already. You know, the cloud vendors, though, obviously open up, and there will be new ones that'll open up over time as well, where we're thinking about ways to blur the lines between public and private. Because I think the world, including the public cloud vendors, has come to realize that, you know, you can't have silos. You can't have a public cloud that's separate from the private and so on. So being able to blur the lines, there'll be a lot of cloud partners for us as well. I think on the hardware side, we already talked about all of them, actually. Now, HP and Cisco are right now partners, in double quotes, because we go and make our software work on it, you know. But on some levels they'll probably also have to open up.
And there are networking partners that we've been working with; you know, Arista is a good case in point, Lexi's another one. And security partners, like Palo Alto, could be a large one over time, because we think about what firewalls need to look like in the next five years, and so on, you know. I think in every way, I look at even the Apache foundation, which is not really a company, but the fact that we can really co-opt a lot of open source and build Calm marketplace apps, where the apps could be spun up in an on-prem environment, a single-tenant on-prem environment, and you can drag and drop them into a multi-tenant environment. I think being able to go and do more with Apache, to me, I would say the biggest game changer for the company would be, what else can we do with Apache? You know, 'cause we did a lot in the first eight years. I mean, obviously, Linux is a big piece of our overall story, you know, not just the hypervisor, but the controller and things like that are all Linux based, which drives the pace of innovation of this company, actually. But beyond Linux, we've used Cassandra and ZooKeeper, RocksDB and things like that. What else can we do with Apache Spark, and Costco, and MariaDB, and things like that? I think we need to go and elevate the definition of infrastructure to include databases and NoSQL systems, and batch processing with Hadoop, and things like that. All those things become a part of the overall marketplace story for us, you know. And that's where the really interesting stuff comes in.
>> How do you look at open source from a strategic standpoint for Nutanix?
>> I think it's been phenomenal, because we have then operated as a company that's bigger than we are. 'Cause otherwise, I mean, look at VMware. They don't have that goodness. Nor does Microsoft, actually. I mean, Amazon is the only one that really goes and makes the best out of open source.
>> Explain that, because we'd say Microsoft has had a huge push into open source, especially, you know, kind of publicly the last two or three years. But they've been working on it, they've, you know, heavily embraced containers. You know, they've gone Kubernetes, you know, heavily.
>> I'm going to give you examples. I think there's a lot of marchitecture in what Microsoft is doing with open source. But, of course, you know, Linux has to work on Hyper-V. So that's a given. They cannot make a relevant stack without really making Linux work on Hyper-V. But they tried Hadoop on Windows, and Hortonworks actually ported Hadoop to Windows, but there are not too many takers, as you see, you know. Containers will probably continue to make a lot of progress on Linux because of the LXD and LXC engines, and things like that. And there's a lot more momentum on the Linux side of containers than on the Windows side of containers.
>> And even Azure is running more Linux than they are Windows these days.
>> Absolutely. Now, that being said, Azure Stack is still Azure Stack. It's still Hyper-V. It's still system-centered, not user-centered, and things like that. I think Microsoft software will really, really have to find itself and change a lot of its thinking to really go and say, we truly embrace open source the way Amazon does, and the way Facebook does, and the way Nutanix does, I think. You know, it's a very different way we look at open source. We are much more like Facebook and Amazon than anyone else. I mean, VMware is way farther away from open source, in that sense.
I mean vSphere overall, you know, I would say that it probably is Linux based. ESX is Linux based from 17, 18 years ago. I am sure that code path has been forked forever, and it's very hard for them to go and uptake from open source, from the overall upstream stuff, actually, that we build on, you know. I mean, our stuff runs on a palm-sized server. A palm-sized server, imagine it. And that's where we put in a drone, and that's the foundation of an edge cloud for us, in some sense. Our stuff runs on IBM Power Systems because IBM was doing a lot of work with open source KVM, and that made it easy for us to port it to AHV, and so on. And so, I think AHV has a lot more momentum because it shares that overall code base of open source as well. And I think, over time, we'll do many more things with open source, including in the platform space.
>> Okay, how's Nutanix doing globally? You know, what more do you want to be doing? How would you rate yourself on, kind of, Nutanix as a global company?
>> I think it's a great question, and it's one of those that's a double edged sword, actually. And I'll tell you what I mean by that. So when you stop growing, non-US business becomes 50%, 'cause that's pretty much the reflection of IT spend. Half the spend is outside the US, half the spend is within the US. Right now it is 65/35, which is a very healthy place to be in, actually. I don't want that to just change to like 50/50, because that's a proxy for, have we stopped growing, actually. At the same time, I'd love to be shipping everywhere, because again, I've said that the definition of an enterprise cloud is even more relevant in, you know, parts of the world that are not the US, actually. In that sense, just being able to go and maintain that customer base outside the US, I mean, being able to do it. I mean, you know, we recently sold a system in Myanmar, actually. And I was telling my friends that, look, now I can die in peace because we have a system in Myanmar, you know. But the very fact that there are partners, and there's the channel community, and there are technology champions and experts, there are certified people in these remote parts of the world, and the fact that we can support these customers successfully, says a lot about the overall reach of the technology. The fact that it's reliable, the fact that it's easy to use and spin up, and the fact that it's easy to get certified on, I think, is the core of Nutanix, so I feel good about those things, actually.
>> You've reached a certain maturity of product market adoption, and we've seen Nutanix starting to spread out into certain things we sometimes call adjacencies. You've talked about some of the different software pieces. How do you manage the growth, the spread, and make sure that, you know, simplicity. We were talking to Seneal this morning about, absolutely you want simplicity, but you also want to, you know... Where does Nutanix play and where don't they play?
>> That's a great question. So, there's a really good book that I was introduced to about two years ago. And it's also... There are some videos on YouTube about this book. It's called The Founder's Mentality; the YouTube video is called The Founder's Mentality as well. And it talks about this very phenomenon: that as companies grow, they become complex, so they introduce a problem. It's called the Paradox of Growth. The thing that you want to do, really do, is grow. And that thing that you covet kills you.
'Cause growth creates complexity, and complexity is a silent killer of growth. So the thing that you covet is the thing that kills you. And that is the Paradox of Growth, actually, you know, in very simple terms. And then it goes on to talk about what are the things you need to do, because you started as an insurgent company, and over time you start acting like you've arrived and you're an incumbent now, all of a sudden. And the moment you start thinking like an incumbent, you're done, in some sense. What are the headwinds, and what are the tailwinds that you can actually produce, to actually stay an insurgent? I think there are some great lessons there about an insurgent mindset, and an owner's mentality, and then finally, this obsession for the front line. How do you think about customers as the first and last thing? So, I think that's one of the guiding principles of the company: how can we continue to imbibe the founder's mentality in there as well, where every employee can be a founder, actually, without really having the founder's tag, and so on. And then internally, there's a lot of things we could do differently in the way that we do engineering, in the way we do collaboration. I mean, these are all good things to revisit design. Not just the product design piece, but organizational design, like what does it mean to have two-pizza teams, and microservices, and product managers, and Prism developers and Calm developers assigned to two-pizza teams, and so on, QA developers and so on. So there's a lot of structure that we can put in at scale that continues to make us look small, continues to have accountability at a product manager level so that they act like GMs, as opposed to PMs, where each of these two-pizza teams is like a quasi P&L. You know, you can look at them very objectively and you can fund them. If they start to become too big, you need to split them. If they are not doing too well, you need to go and kill them, actually.
>> Alright, Dheeraj, last question I have for you. Enterprise cloud, I think, you know, when it first came out as a term, we said it was a little bit aspirational. What should we be looking for in a year to really benchmark and show as proof points that it's becoming reality, you know, from Nutanix?
>> That's a great point. You know, obviously, when Gartner starts to use a term very close to, you know, what I say. They used the term enterprise cloud operating system, and in one of the recent discourses I saw, enterprise cloud operating model. That's very similar, system versus model, but the operating model of the enterprise cloud is based on the tenets of, you know, web-scale engineering, you know, the fact that things run on commodity servers. Everything is pure software and you have zero differentiation in hardware; all the differentiation comes in pure software. Infrastructure is code. All those things are not going away. Now, how it becomes easy to use, so that you don't need PhDs to manage it, is where consumer grade design comes in, and where you have this notion of Prism and Calm that actually come to really help make it easy to use. I think this is the core of enterprise cloud itself, you know. I think, obviously, every layer in this overall cake needs more features, more capability, and so on. But foundationally, it's about web-scale engineering and consumer grade design. And if you're doing these two things, getting more workloads, getting more geographies, getting more platforms, getting more features...
All those things are basically a rite of passage. You know, you need to continue to do them all the time, actually.
>> Alright, Dheeraj, I had a customer on who said the reason he bought Nutanix was for that fullness of vision. So, always appreciate catching up with you. And we'll be back with lots more coverage here from Nutanix .NEXT, here in Nice, France. I'm Stu Miniman, and you're watching theCUBE.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dheeraj | PERSON | 0.99+ |
Myanmar | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
NCC | ORGANIZATION | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
HP | ORGANIZATION | 0.99+ |
US | LOCATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Microsoft | ORGANIZATION | 0.99+ |
Lenovo | ORGANIZATION | 0.99+ |
20 years | QUANTITY | 0.99+ |
android | TITLE | 0.99+ |
Dheeraj Pandey | PERSON | 0.99+ |
50% | QUANTITY | 0.99+ |
Apache | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
Cicer | ORGANIZATION | 0.99+ |
15 years | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
iPad | COMMERCIAL_ITEM | 0.99+ |
Palo Alto | ORGANIZATION | 0.99+ |
iOS | TITLE | 0.99+ |
The Founder's Mentality | TITLE | 0.99+ |
iPod | COMMERCIAL_ITEM | 0.99+ |
The Founder's Mentality | TITLE | 0.99+ |
Hadoop | TITLE | 0.99+ |
first | QUANTITY | 0.99+ |
Linux | TITLE | 0.99+ |
each | QUANTITY | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
Windows | TITLE | 0.99+ |
ORGANIZATION | 0.99+ | |
ESX | TITLE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Nice, France | LOCATION | 0.99+ |
Costco | ORGANIZATION | 0.99+ |
Azure Stack | TITLE | 0.98+ |
H-V | TITLE | 0.98+ |
iPhone four | COMMERCIAL_ITEM | 0.98+ |
first eight years | QUANTITY | 0.98+ |
two PIDs | QUANTITY | 0.97+ |
iTunes | TITLE | 0.97+ |
one | QUANTITY | 0.97+ |
vSphere | TITLE | 0.97+ |
350 odd health checks | QUANTITY | 0.97+ |
YouTube | ORGANIZATION | 0.97+ |
Chinmay Soman | Flink Forward 2017
>> Welcome back, everyone. We are on the ground at the data Artisans user conference for Flink. It's called Flink Forward. We are at the Kabuki Hotel in lower Pacific Heights in San Francisco. The conference kicked off this morning with some great talks by Uber and Netflix. We have the privilege of having with us Chinmay Soman from Uber.
>> Yes.
>> Welcome, Chinmay, it's good to have you.
>> Thank you.
>> You gave a really, really interesting presentation about the pipelines you're building and where Flink fits, but you've also said there's a large deployment of Spark. Help us understand how Flink became a mainstream technology for you, where it fits, and why you chose it.
>> Sure. About one year back, when we were starting to evaluate what technology makes sense for the problem space that we are trying to solve, which is real-time analytics, we observed that Spark's stream processing is actually more resource intensive than some of the other technologies we benchmarked. More specifically, it was using more memory and CPU at that time. That's one. I actually came from the Apache Samza world; I was on the Samza team at LinkedIn before I came to Uber. We had in-house expertise on Samza, and I think the reliability was the key motivation for choosing Samza. So we started building on top of Apache Samza for almost the last one and a half years. But then we hit the scale where Samza, we felt, was lacking. So with Samza, it's actually tied into Kafka a lot. You need to make sure your Kafka scales in order for the stream processing to scale.
>> In other words, the topics and the partitions of those topics, you have to keep the physical layout of those in mind at the message queue level, in line with the stream processing.
>> That's right. The parallelism is actually tied to the number of partitions in Kafka. Furthermore, if you have a multi-stage pipeline, where one stage processes data and sends output to another stage, all these intermediate stages, today, again go back to Kafka. So if you want to do a lot of these use cases, you actually end up creating a lot of Kafka topics, and the I/O overhead on the cluster shoots up exponentially.
>> So when creating topics, or creating consumers that do something and then output to producers, if you do too many of those things, you defeat the purpose of low latency because you're storing everything.
>> Yeah. The upside of it is, it is more robust, because if you suddenly get a spike in your traffic, your system is going to handle it because Kafka buffers that spike. It gives you a very reliable platform, but it's not cheap. So that's why we're looking at Flink. In Flink, you can actually build a multi-stage pipeline and have in-memory queues instead of writing back to Kafka, so it is fast and you don't have to create multiple topics per pipeline.
>> So, let me unpack that just a little bit to be clearer. The in-memory queues give you, obviously, better I/O.
>> Yeah, so backpressure is interesting. If you have everything in Kafka and no in-memory queues, there is no backpressure, because Kafka is a big buffer; it just keeps running. With in-memory queues, there is backpressure. Another question is, how do you handle this? So going back to Samza systems, they actually degrade and can't recover once they are in backpressure. But Flink, as you've seen, slows down consuming from Kafka, but once the spike is over, once you're over that hill, it actually recovers quickly.
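To ground the pipeline discussion above, here is a minimal, hedged sketch of what a multi-stage job looks like in Flink's DataStream API: the job reads from Kafka once, and the hand-offs between the later stages travel over Flink's in-memory and network channels rather than through intermediate Kafka topics. The broker address, topic name, record format and connector class are illustrative assumptions, not details of Uber's actual pipeline.

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class MultiStagePipelineSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092");   // assumed broker address
        props.setProperty("group.id", "trip-events-consumer");  // assumed consumer group

        // Stage 1: the only Kafka read in the whole pipeline ("trip-events" is a made-up topic).
        DataStream<String> raw = env.addSource(
                new FlinkKafkaConsumer<>("trip-events", new SimpleStringSchema(), props));

        // Stages 2-3: parse, then aggregate per key in one-minute windows. The hand-off
        // between these operators flows through Flink's in-memory and network channels,
        // not through intermediate Kafka topics.
        DataStream<Tuple2<String, Integer>> countsPerCity = raw
                .map(line -> Tuple2.of(line.split(",")[0], 1))   // assume "city,..." records
                .returns(Types.TUPLE(Types.STRING, Types.INT))
                .keyBy(t -> t.f0)
                .window(TumblingProcessingTimeWindows.of(Time.minutes(1)))
                .sum(1);

        countsPerCity.print();   // stand-in for a real sink
        env.execute("multi-stage pipeline sketch");
    }
}
```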
It is able to sustain heavy spikes.
>> Okay, so this goes to your issues with keeping up with the growth of data...
>> That's right.
>> You know, the system, there's multiple levels of elasticity and then resource intensity. Tell us about that, and the desire to get as many jobs as possible out of a certain level of resource.
>> So, today, we are a platform where people come in and say, "Here's my code," or, "Here's my SQL that I want to run on your platform." In the old days, they were telling us, "Oh, I need 10 gigabytes for a container," and that they need these many CPUs, and that really limited how many use cases we onboarded and made our hardware footprint pretty expensive. So we need the pipeline, the infrastructure, to be really memory efficient. What we have seen is memory is the bottleneck in our world, more so than CPU. A lot of applications, they consume from Kafka, they actually buffer locally in each container, and they do that in the local memory, in the JVM memory. So we need the memory component to be very efficient, and we can pack more jobs on the same cluster if everyone is using less memory. That's one motivation. The other thing, for example, that Flink does and Samza also does, is make use of a RocksDB store, which is a local persistent--
>> Oh, that's where it gets the state management.
>> That's right, so you can offload from memory on to the disk--
>> Into a proper database.
>> Into a proper database, and you don't have to cross a network to do that because it's sitting locally.
>> Just to elaborate on what might seem like an arcane topic: if it's residing locally, then anything it's going to join with has to also be residing locally.
>> Yeah, that's a good point. You have to be able to partition your inputs and your state in the same way, otherwise there's no locality.
>> Okay, and you'd have to shuffle stuff around the network.
>> And more than that, you need to be able to recover if something happens, because there's no replication for this state. If the hard disk on that node crashes, you need to recreate that cache from somewhere. So either you go back and read from Kafka, or you store that cache somewhere. So Flink actually supports this out of the box, and it snapshots the RocksDB state into HDFS.
>> Got it, okay. It's more resilient--
>> Yes.
>> And more resource efficient. So, let me ask one last question. Mainstream enterprises, or at least the very largest ones, have been trying to wrestle their arms around some open source projects. Very innovative, the pace of innovation is huge, but it demands a skillset that seems to be most resident in large consumer internet companies. What advice do you have for them, where they aspire to use the same technologies that you're talking about to build new systems, but they might not have the skills?
>> Right, that's a very good question. I'll try to answer in the way that I can. I think the first thing to do is understand your scale. Even if you're a big, large banking corporation, you need to understand where you fit in the industry ecosystem. If it turns out that your scale isn't that big and you're using it for internal analytics, then you can just pick the off-the-shelf pipelines and make it work. For example, if you don't care about multi-tenancy, if your hardware spend is not that much, actually anything might work. The real challenge is when you pick a technology and make it work for large use cases and you want to optimize for cost.
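As a hedged illustration of the state discussion above, the sketch below configures a Flink job to keep its keyed state in an embedded RocksDB instance on local disk and to snapshot that state periodically to HDFS, which is what lets a job rebuild its local "cache" after a disk or node failure instead of rereading everything from Kafka. The checkpoint interval and HDFS path are made-up values, and the exact class names can differ across Flink versions.

```java
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbCheckpointSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Keyed state lives in RocksDB on each task manager's local disk, so large state
        // is spilled out of the JVM heap instead of inflating container memory.
        // The second argument enables incremental snapshots (only changed files are uploaded).
        env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints", true));

        // Snapshot state to HDFS every 60 seconds (illustrative interval). On failure the job
        // restores the latest snapshot and replays Kafka from the offsets recorded in it,
        // instead of rebuilding the whole cache from scratch.
        env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE);

        // Tiny placeholder pipeline so the sketch runs; a real job would have Kafka sources
        // and stateful keyed operators here.
        env.fromElements("a", "b", "a", "c")
           .keyBy(value -> value)
           .map(value -> value)
           .print();

        env.execute("rocksdb checkpointing sketch");
    }
}
```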
That's where you need a huge engineering organization. So in simpler words, if the extent of your use cases is not that big, pick something which has a lot of support from the community. Most common things just work out of the box, and that's good enough. But if you're doing a lot of complicated things, like real-time machine learning, or your scale is in billions of messages per day, or terabytes of data per day, then you really need to make a choice: whether you invest in an engineering organization that can really understand these use cases, or you go to companies like Databricks, get support from Databricks, or...
>> Or maybe a cloud vendor?
>> Or a cloud vendor, or things like Confluent, which is giving Kafka support, things like that. I don't think there is one answer. To me, for our use case, for example, the reason we chose to build an engineering organization around it is because our use cases are immensely complicated and not really seen before, so we had to invest in this technology.
>> Alright, Chinmay, we're going to leave it on that and hopefully keep the dialogue going--
>> Sure.
>> offline. So, we'll be back shortly. We're at Flink Forward, the data Artisans user conference for Flink. We're on the ground at the Kabuki Hotel in downtown San Francisco, and we'll be right back.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Databricks | ORGANIZATION | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
Chinmay | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Chinmay Soman | PERSON | 0.99+ |
Kafka | TITLE | 0.99+ |
Confluent | ORGANIZATION | 0.99+ |
Flink | ORGANIZATION | 0.99+ |
10 gigabytes | QUANTITY | 0.99+ |
each container | QUANTITY | 0.99+ |
San Francisco | LOCATION | 0.98+ |
today | DATE | 0.98+ |
one answer | QUANTITY | 0.98+ |
Apache | ORGANIZATION | 0.98+ |
2017 | DATE | 0.97+ |
one last question | QUANTITY | 0.95+ |
first thing | QUANTITY | 0.95+ |
Spark | TITLE | 0.93+ |
Pacific Heights | LOCATION | 0.91+ |
this morning | DATE | 0.86+ |
Kabuki Hotel | LOCATION | 0.85+ |
RocksDB | TITLE | 0.83+ |
About one year back | DATE | 0.82+ |
terabytes of data | QUANTITY | 0.82+ |
one motivation | QUANTITY | 0.8+ |
SQL | TITLE | 0.8+ |
Forward | EVENT | 0.78+ |
Samza | ORGANIZATION | 0.74+ |
Samza | TITLE | 0.73+ |
one stage | QUANTITY | 0.73+ |
billions of messages per day | QUANTITY | 0.72+ |
Artisans | EVENT | 0.7+ |
last one and a half years | DATE | 0.69+ |
Artisans user | EVENT | 0.62+ |
Samza | COMMERCIAL_ITEM | 0.34+ |