Sunil Potti, Nutanix | Nutanix .NEXT Conference 2019
>> Voiceover: Live! From Anaheim, California, it's theCUBE. Covering Nutanix .NEXT 2019. Brought to you by Nutanix. >> Welcome back everyone to theCUBE's live coverage of Nutanix .NEXT, here in Anaheim, California. I'm your host, Rebecca Knight, along with my co-host, John Furrier. We're joined by Sunil Potti, he is the chief product and development officer here at Nutanix. Thank you so much for coming on the show. >> Glad to be here. >> So we are talking about the era of invisible infrastructure, and this morning on the main stage there were many, many different announcements, new products and adjustments, augmentations to products. Can you walk our viewers through a little bit of what you were talking about today? >> Yeah, I mean (inaudible) so in fact, our vision really hasn't materially changed over the last few years. In fact, my team always teases me that all I do is essentially change the timeline but the same slideshow is up. But you know, there's something about vision being consistent, and we sort of have broken that up into two major phases. The first phase is essentially to move cloud from being a destination to being an experience. What do I mean by that? Essentially, everyone knows about cloud as being something served by Amazon, or Google, or (inaudible) and ultimately, our belief has been that if we do an honest job of what Amazon or Google provide natively, but bring cloud to the customers rather than having the customers go to a destination, then they can essentially get maybe 60 or 70 percent of that experience, but maybe at a tenth of the price or a tenth of the time. And with most human beings, as you guys know, once you get 60 or 70 percent, you're happy and you move on to other things. And that's really the first act of this company, to sort of bring cloud to the customers. And doing so, in my opinion, solves one of cloud's biggest, you know, perennial issues, which is migration. Because that's essentially where lift and shift gets in the way: I've gotta change something that I've invested 20 years in, and I've gotta lift and shift it. And if something comes to you, that gap is dramatically reduced, right? And sure, we don't do everything that public clouds do, but, like I said, if you can do an honest job of that 60%, then it turns out that most customers now adopt Nutanix looking at public cloud as more of a tailwind instead of a headwind, because the more they taste Amazon outside, the more they want Amazon inside. And so, that's really the first act of the company: a series of products that allow us to build out a full-blown IaaS stack, but also a bunch of services such as desktops, databases, all the usual services. So it's all about increasing the layers of abstraction to the user so they can do one-click operations. So, that's the first act. And the second act, which is much more a longer-term bet for the next decade or so, is that if the first act was about bringing cloud to you to replatform the data center, customers are also going to redesign their apps, and when they redesign their apps, do you want to do it on an operating system that locks you into only one public cloud? Or do you want to do it on something that can move across clouds? And that's our second act of the company. And there's a lot of details there. >> John Furrier: So hyper-convergence was a great concept and you proved it out, great customer base, core business is humming along, solid, but the growth is gonna come from Essentials, which is the enterprise in multiple clouds.
So I get that. As you guys look and build those products and you're the chief product officer, you have the keys to the kingdom, it's all on you. >> It's in my guide to work out. >> So you're a team. But this is a big pressure, this is the opportunity. As you think about a software company, as you guys are shifting from being hardware to software, things start to be different, so as you start thinking about act two, the convergence of clouds. That really is a key part of it, what you did for the data center, HCI, >> Yeah, totally. >> You're doing HCI for the cloud. >> Yeah, like what does that actually mean? >> So explain that concept. >> No, it's a great question. So, and some of this, obviously, we are struggling through ourselves. But we are not afraid of making mistakes in this transition. As you've seen over the last year, we've gone from being an appliance company, to software that runs on third-party hardware, to being a subscription company, to now running on clouds. All within a span of 12 months, while building a business, right? And sometimes it works, sometimes we pick ourselves up and learn from mistakes and go, but to your point, I think, we're not afraid to become an app on somebody else's operating system. Just like Microsoft said, "Look, I'm gonna release Office on Mac or iPad before I even do it on Windows," that kind of thinking has to permeate pretty much every technology company going forward, in my opinion. A good example of that is, look, if somebody wants to consume the applications that they built on Nutanix on premise, but their idea is, look, they don't wanna be in the data center business tomorrow, then without changing the apps they should be able to take that entire infrastructure and those applications and consume it inside Amazon's fabric, because they provide a bunch of other services as well as data centers. So, the recent announcement of Nutanix in AWS, not on AWS, for a reason, is an example of us becoming an app on somebody else's operating system. That's an example of us transforming further away from being an infrastructure-only or an appliance-only company. >> What does this mean for your customers and your partners, because you guys have taken an open strategy with partnering, the HPE announcements, very successfully off the tee, in the middle of the fairway as we say, looking good. That seems to be the trend, others taking a different approach, you know, that is, owning it all. >> Yeah yeah, in fact I would say that look, in some way, internally we joke about ourselves, as we have to prove the... You know, we always used to think about ourselves as a smartphone for the enterprise, consumerizing the data center. But we had to prove that model by owning the full stack like Apple did, but over a period of time, the democratization happens by distribution. And so in some ways, we have to become more of an Android-like company while retaining the best practices of the delight and the security of an Apple device. So that's the easiest analogy, where we're trying to work with partners like Dell, Lenovo, and now increasingly, Hitachi, Fujitsu, Inspur, Intel; everybody is signed up, just because everybody now knows that the customers want an experience. And now the latest relationship with HP takes it to the next level, where we want to bring essentially Supermicro-like appliance goodness, one-click upgrades, support, everything, but with an HPE-backed platform that both companies can benefit from.
>> You know, one of the big complaints from customers, I hear, on theCUBE, and also privately, is there's so many tools and management software: I've got a management plane for this, I got this over here, >> For sure... >> So there's kinda this toolshed mentality of, you know, a new hire, learn this tool for that software; people don't want another tool, they don't want another platform. So, how do you see that, how do you address that going forward, this act two, as you continue to build the products? What's the strategy and what's the value proposition for customers? >> I mean, I think it's no different than how we sort of launched the company in the first place, which is there's no way you can say we'll simplify your life without removing parts. That was the original Steve Jobs thing, right? The true way to simplify is to remove parts, right? And essentially that's what hyper-convergence has done; it's just we're doing this not just for infrastructure but for clouds, because when you use Nutanix you throw away old compute, you throw away old storage, you throw away old (inaudible) I mean, that's the only way to converge your experience down to one tool. You can't stitch together ten tools into this magical fabric, I mean it doesn't work that way. But that's hard, because not every customer is ready to do that, not every partner is ready to do that, they've got their own little incumbencies. But that's the journey we're on, it's a rite of passage for us, we have to earn it the old-fashioned way, and we've done reasonably well so far. >> So you mentioned Steve Jobs, he also said, when he was alive, in an interview, in the lost interviews on Netflix, I watched that recently. He said, also, software gives you the opportunity to move the needle on efficiencies, and change the game, much more significantly than managing a process improvement, which can give you maybe 30% yield. He's saying you can go 60, 80% changeover with software. This is part of your strategy, how do you guys see Nutanix in the future, with the software-led approach, changing the game for IT? >> I think clearly, software is fundamental, I mean the whole point of us, our product was, I think, we have some folks on the platform group that help make sure that the software runs, because software has to run somewhere, by the way. It doesn't run in air, it runs on hardware. So let's not underemphasize hardware for that reason, but most of our IP has been in software. But I would say that the real thing for us that has kept us going is design of software, which is essentially also, when you go back to the Apple thing, because a lot of software renders out that too. It's how you design it, starting with why, rather than just going to the how, is how we see ourselves differentiating what we deliver to our customers over the next 5 years. >> Rebecca Knight: I want to ask you about innovation and your process, because here you are, you're the chief product officer at this very creative company. I wanna know, what sparks your creativity, where do you get your ideas? Of course you're gonna say, "I talk to customers and I find out their problems," but where do you go for inspiration? >> Yeah, I think it's an age-old problem. I'll give you my personal answer, I don't think it's representative of everyone in the company obviously. And that's one of the good things with Nutanix, each of us has their own point of view and things, right? We have this term of "let chaos reign and then rein in chaos". Right? To some extent.
That has been done well at other companies like Google, and so forth. So, I've always believed in a couple of vectors for inspiration. The most obvious one is to listen to others. More than talk. Whether it's listening to customers, listening to partners, listening to other employees with other ideas, and have a curated way to do that, because if you only listen to customers you build faster horses, not carts, as Henry Ford said, okay? So that's what I would call a generic theme, and you'd think that it's easy to do, but it's very hard to truly tell the signal from the noise, by the way. So there's an art there that one has to get better at. But the DNA has to be there to listen; that's the first thing I would say. The second thing, which I think is maybe deeper, and that's probably more in the... The first one applies to maybe 1%. The second one probably applies to .001%, which is having intuition of what's right. And this ability, people call it, I don't know, big words like vision and so forth, the ability to see around corners and anticipate. You know, my old manager, a guy that I respect a lot, Mark Templeton, who was the CEO of Citrix, used to always ask this question: "Do you know why Michelin has three stars? The first star is for food, obviously, there has to be good food. The second star is for service. The third star, not many people know what it's for." According to him, and I haven't really checked it yet, I haven't really eaten in too many Michelin three-star restaurants, it's anticipation. And product strategy is a little bit like that, right? So to me, that's where Nutanix really trumps the competition: it's that second dimension of intuition, more so than even listening to customers. >> It's seeing around those corners, and knowing which way the winds are blowing. >> Totally. >> One of the other things that we're talking a lot about, here on theCUBE, particularly at this conference, is the importance of culture. Nutanix... we had Dheeraj on this morning talking about the sort of playful nature that he tries to bring to the company, and that really has filtered down. How would you describe the Nutanix culture and how do you maintain the culture? >> So I think, we... I'll tell you personally, the journey that I was on, that there were a couple of things that I brought to the table, a couple things that I learned myself, as well as what I could see. A couple things that you'll see in a company that has been built by founders, in my opinion, I'm not a founder or entrepreneur myself, but I've seen them in action now, is they bring one dimension that I've not seen in big company leaders, which is continuous learning. Because that's the only way they can stay in the company when it goes from zero to ninety, right? And the folks that continuously learn, stay. If they don't, they leave and we get professional leaders. So, continuous learning, if it can be applied to the generic company, becomes an amplifying effect now. People can learn how to grow, look around the corners; they can learn things that otherwise they aren't born with, in my opinion. So I think that's one unique dimension that Nutanix, I think, inculcates in a lot of people, is this continuous learning. The other dimension, which I think everybody knows about, is Nutanix being this humble, hungry, honest, with heart; you know, those four words sort of capture the, a sense of, the playful authenticity. But I think we're not afraid to be wrong. And, we're not afraid to make fun of ourselves.
We're not afraid to be, I guess, ourselves, right? And that, I think, is easy to say, but very hard to do. >> John Furrier: You learn through your mistakes, as they say, learn through failure. So, you mention intuition. What does your intuition tell you about the current ecosystem as the market starts to really accelerate with multi-cloud, on-premise private cloud, which, by the way, good intuition, of course we keep on, at the first private cloud reports dominion and team, they got that right. The waves are coming and they look different. There's gonna be more integration, we think. What does your intuition tell you about these next couple waves that are gonna come into the landscape of the tech industry? >> Yeah, I mean I think, since I do want to come back on theCUBE again and again, and have something left over, I will say one thing though, is I think the game in multi-cloud is going to move up the stack, okay? That's where the next set of cloud wars are going to be fought. It's who's going to provide not just a great database as a service, but a great database itself. Because, Oracle's time's up, as far as I'm concerned, right? And you're going to see that with many traditional software stacks, some of them are SaaS stacks that have been around for 20 years, by the way. Some of the largest SaaS companies have been around for 20 years. It's time for a reboot for most of those companies. >> How about the Edge? What does the intuition tell you on the Edge? Certainly very relevant, you've got power, you've got connectivity expanding, Wi-Fi 6 around the corner, we've seen that. 5G, okay, I buy it. But as it really starts to figure itself out, it's just another node on the network. What's your intuition tell you? >> Yeah, I mean, this is one area that I'm not too deep in, I've got other guys in my team who know a lot more, but my intuition tells me, the more things change, the more they'll remain the same, in that area, right? So don't be surprised if they just end up being another smartphone. You know, it's got an operating system, it runs apps, it's centrally controlled, talks to services in the back end, I see no reason why the Edge should be any different, if that makes sense. >> John Furrier: Yeah, exactly. Then data, big part of it. Big part of your strategy, the data piece, >> Of course, of course, yeah. I mean I think data being a core competency of any company is going to stand out, I think, in the next 5, 10 years. >> John Furrier: Awesome. What's going on at the show? What's been your hottest conversation in the hallways, talking to customers, partners, employees, what's some of the trending conversation? >> I don't know, this conversation's pretty interesting! (laughs) >> Of course! >> Rebecca Knight: We agree! (Laughs) >> My intuition is telling me this is a good conversation! Hope it comes out good! >> Keep using that word, man. >> I love it! >> Anyway, always great to be with you guys. >> Sunil, thank you so much for returning to theCUBE. >> Anytime. >> I'm Rebecca Knight, for John Furrier, we will have much more from Nutanix .NEXT coming up in just a little bit. Stay with us. (upbeat music)
Sumit Puri, Liqid | CUBEConversation, March 2019
(upbeat music) >> From our studios, in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Hey, welcome back everybody, Jeff Frick here with theCUBE. We're at our Palo Alto studios having a CUBE Conversation, we're just about ready for the madness of the conference season to start in a few months, so it's nice to have some time to have things a little calmer in the studio, and we're excited to have a new company, I guess they're not that new, but they're relatively new, they've been working on a really interesting technology around infrastructure, and we welcome to the studio, first time, I think, Sumit Puri, CEO and co-founder of Liqid, welcome. >> Thank you guys, very very happy to be here. >> And joined by our big brain, David Floyer, of course, the CTO and co-founder of Wikibon and knows all things infrastructure. Dave, always good to see you. >> It's so good to see you. >> All right, so let's jump into this, Sumit, give us the basic overview of Liqid, what are you guys all about, little bit of the company background, how long you've been around. No, absolutely, absolutely, Liqid is a software-defined infrastructure company, the technology that we've developed is referred to as composable infrastructure, think, dynamic infrastructure, and what we do, is we go and we turn data center resources from statically-configured boxes to dynamic, agile infrastructure. Our core technology is two-part. Number 1, we have a fabric layer, that allows you to interconnect off-the-shelf hardware, but more importantly, we have a software layer, that allows you to orchestrate, or dynamically configure servers, at the bare metal. >> So, who are you selling these solutions to? What's your market, what's the business case for this solution? >> Absolutely, so first, I guess, let me explain a little bit about what we mean by composable infrastructure. Rather than building servers by plugging devices into the sockets of the motherboard, with composability it's all about pools, or trays, of resources. A tray of CPUs, a tray of SSDs, a tray of GPUs, a tray of networking devices, instead of plugging those into a motherboard, we connect those into a fabric switch, and then we come in with our software, and we orchestrate, or recompose, at the bare metal. Grab this CPU, grab those four SSDs, these eight GPUs, and build me a server, just like you were plugging devices into the motherboard, except you're defining it in software, on the other side, you're getting delivered infrastructure of any size, shape, or ratio that you want. Except that infrastructure is dynamic, when we need another GPU in our server, we don't send a guy with a cart to plug the device in, we reprogram the fabric and add or remove devices as required by the application. We give you all the flexibility that you would get from public cloud, on the infrastructure that you are forced to own. And now, to answer your question of where we find a natural fit for our solution, one primary area is obviously cloud. If you're building a cloud environment, whether you're providing cloud as a service or whether you're providing cloud to your internal customers, building a more dynamic, agile cloud is what we enable. 
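To make the "grab this CPU, those four SSDs, these eight GPUs" idea concrete, here is a minimal sketch of what such a composition request could look like in software. The endpoint, object names, and fields are hypothetical stand-ins; Liqid's actual orchestration API is not described in this conversation.

```python
import requests

# Hypothetical fabric-manager endpoint; illustrative only, not Liqid's real API.
FABRIC_MANAGER = "https://fabric-manager.example.local/api/v1"

# Describe the bare-metal server to compose out of the disaggregated pools:
# trays of CPUs, NVMe SSDs, GPUs, and NICs hanging off the fabric switch.
machine_spec = {
    "name": "ml-node-01",
    "cpu": {"count": 1, "pool": "cpu-tray-a"},
    "ssd": {"count": 4, "pool": "nvme-tray-b"},
    "gpu": {"count": 8, "pool": "gpu-tray-c"},
    "nic": {"count": 2, "pool": "nic-tray-d"},
}

# Ask the fabric software to program the switch so the chosen devices show up
# to the host as if they were plugged into its motherboard.
resp = requests.post(f"{FABRIC_MANAGER}/machines", json=machine_spec, timeout=30)
resp.raise_for_status()
print("composed:", resp.json())

# When the workload is done, tear the machine down so the devices return to
# the shared pools for the next composition.
requests.delete(f"{FABRIC_MANAGER}/machines/ml-node-01", timeout=30)
```

The point of the sketch is only that adding or removing a GPU becomes an API call against the fabric rather than a visit with a cart.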
So, is the use case more just to use your available resources and reconfigure it to set up something that basically runs that way for a while, or are customers more using it to dynamically reconfigure those resources based on, say, a temporary workload, which is kind of a classic cloud example, where you need a bunch of something now, but not necessarily forever? >> Sure. The way we look at the world is very much around resource utilization. I'm buying this very expensive hardware, I'm deploying it into my data center, typical resource utilization is very low, below 20%, right? So what we enable is the ability to get better resource utilization out of the hardware that you're deploying inside your data center. If we can take a resource that's utilized 20% of the time, because it's deployed as a static element inside of a box, and we can raise the utilization to 40%, does that mean we are buying less hardware inside of our data center? Our argument is yes: if we can take rack-scale efficiency from 20% to 40%, our belief is we can do the same amount of work with less hardware. >> So it's a fairly simple business case, then. To do that. So who are your competition in this area? Is it people like HP or Intel, or, >> That's a great question, I think both of those are interesting companies. I think HPE is the 800-pound gorilla in this term called composability, and we find ourselves taking a slightly different approach than the way that those guys take it. I think first and foremost, the way that we're different is because we're disaggregated, right? When we sell you trays of resources, we'll sell you a tray of SSDs or a tray of GPUs, whereas HP takes a converged solution, right? Every time I'm buying resources for my composable rack, I'm paying for CPUs, SSDs, GPUs, all of those devices as a converged resource, so they are converged, we are disaggregated. We are bare metal; we have a PCIe-based fabric up and down the rack, they are an Ethernet-based fabric, and there are no Ethernet SSDs, there are no Ethernet GPUs, at least today, so by using Ethernet as your fabric, they're forced to do virtualization and protocol translation, so they are not truly bare metal. We are bare metal; we view them more as a virtualized solution. We're an open ecosystem, we're hardware-agnostic, right? We allow our customers to use whatever hardware that they're using in their environment today. Once you've kind of gone down that HP route, it's very much a closed environment. >> So what about some of the customers that you've got? Which sort of industries, which sort of customers? I presume this is for the larger types of customers, in general, but say a little bit about where you're making a difference. >> No, absolutely, right? So, obviously at scale, composability has even more benefit than in smaller deployments. I'll give you just a couple of use case examples. Number one, we're working with a transportation company, and what happens with them at 5 p.m. is actually very different than what happens at 2 a.m., and the model that they have today is a bunch of static boxes, and they're playing a game of workload matching. If the workload that comes in fits the appropriate box, then the world is good. If the workload that comes in ends up on a machine that's oversized, then resources are being wasted, and what they said was, "We want to take a new approach.
"We want to study the workload as it comes in, "dynamically spin up small, medium, large, "depending on what that workload requires, "and as soon as that workload is done, "free the resources back into the general pool." Right, so that's one customer, by taking a dynamic approach, they're changing the TCO argument inside of their environment. And for them, it's not a matter of am I going dynamic or am I going static, everyone knows dynamic infrastructure is better, no one says, "Give me the static stuff." For them, it's am I going public cloud, or am I going on prem. That's really the question, so what we provide is public cloud is very easy, but when you start thinking about next-generation workloads, things that leverage GPUs and FPGAs, those instantiations on public cloud are just not very cheap. So we give you all of that flexibility that you're getting on public cloud, but we save you money by giving you that capability on prem. So that's use case number one. Another use case is very exciting for us, we're working with a studio down in southern California, and they leverage these NVIDIA V100 GPUs. During the daytime, they give those GPUs to their AI engineers, when the AI engineers go home at night, they reprogram the fabric and they use those same GPUs for rendering workloads. They've taken $50,000 worth of hardware and they've doubled the utilization of that hardware. >> The other use case we talked about before we turned the cameras on there, was pretty interesting, was kind of multiple workloads against the same data set, over a series of time where you want to apply different resources. I wonder if you can unpack that a little bit because I think that's a really interesting one that we don't hear a lot about. So, we would say about 60 plus to 70% of our deployments in one way or another touch the realm of AI. AI is actually not an event, AI is a workflow, what do we do? First we ingest data, that's very networking-centric. Then we scrub and we clean the data, that's actually CPU-centric. Then we're running inference, and then we're running training, that's GPU-centric. Data has gravity, right? It's very difficult to move petabytes of data around, so what we enable is the composable AI platform, leave data at the center of the universe, reorchestrate your compute, networking, GPU resources around the data. That's the way that we believe that AI is approached. >> So we're looking forward in the future. What are you seeing where you can make a difference in this? I mean, a lot of changes happening, there's Gen 4 coming out in PCIe, there's GPUs which are moving down to the edge, how do see, where do you see you're going to make a difference, over the next few years. >> That's a great question. So I think there's 2 parts to look at, right? Number one is the physical layer, right? Today we build or we compose based upon PCIe Gen 3 because for the first time in the data center, everything is speaking a common language. When SSDs moved to NVMe, you had SSDs, network cards, GPUs, CPUs, all speaking a common language which was PCIe. So that's why we've chosen to build our fabric on this common interconnect, because that's how we enable bare metal orchestration without translation and virtualization, right? Today, it's PCIe Gen 3, as the industry moves forward, Gen 4 is coming. Gen 4 is here. We've actually announced our first PCIe Gen 4 products already, and by the end of this year, Gen 4 will become extremely relevant into the market. 
Our software has been architected from the beginning to be physical-layer agnostic, so whether we're talking PCIe Gen 3, PCIe Gen 4, or in the future something referred to as Gen Z, (laughing) it doesn't matter for us, we will support all of those physical layers. For us it's about the software orchestration. >> I would imagine, too, like TPUs and other physical units that are going to be introduced in the system, too, you're architected to be able to take those, new-- >> Today, today we're doing CPUs, GPUs, NVMe devices, and we're doing NICs. We just made an announcement, now we're orchestrating Optane memory with Intel. We've made an announcement with Xilinx where we're orchestrating FPGAs with Xilinx. So this will continue; we'll continue to find more and more of the resources that we'll be able to orchestrate, for a very simple reason: everything has a common interconnect, and that common interconnect is PCIe. >> So this is an exciting time in your existence. Where are you? I mean, how far along are you to becoming the standard in this industry? >> Yeah, no, that's a great question, and I think, we get asked a lot what company we are most similar to, or most like, at the early stage. And what we say is we, a lot of the time, compare ourselves to VMware, right? VMware is the hypervisor for the virtualization layer. We view ourselves as that physical hypervisor, right? We do for physical infrastructure what VMware is doing for virtualized environments. And just like VMware has enabled many of the market players to get virtualized, our hope is we're going to enable many of the market players to become composable. We're very excited about our partnership with Inspur; just recently we've announced, they're the number three server vendor in the world, we've announced an AI-centric rack, which leverages the servers and the storage solutions from Inspur tied to our fabric to deliver a composable AI platform. >> That's great. >> Yeah, and it seems like the market for cloud service providers, 'cause we always talk about the big ones, but there's a lot of them, all over the world, is a perfect use case for you, because now they can actually offer the benefits of cloud flexibility by leveraging your infrastructure to get more miles out of their investments into their backend. >> Absolutely, cloud, cloud service providers, and private cloud, that's a big market and opportunity for us, and we're not necessarily chasing the big seven hyperscalers, right? We'd love to partner with them, but for us, there's 300 other companies out there that can use the benefit of our technology. So they necessarily don't have the R&D dollars available that some of the big guys have, so we come in with our technology and we enable those cloud service providers to be more agile, to be more competitive. >> All right, Sumit, before we let you go, season's coming up, we were just at RSA yesterday, big shows comin' up in May, where you guys, are we going to cross paths over the next several weeks or months?
>> No, absolutely, we got a handful of shows coming up, very exciting season for us, we're going to be at the OCP, the Open Compute Project conference, actually next week, and then right after that, we're going to be at the NVIDIA GPU Technology Conference, we're going to have a booth at both of those shows, and we're going to be doing live demos of our composable platform, and then at the end of April, we're going to be at the Dell Technology World conference in Las Vegas, where we're going to have a large booth and we're going to be doing some very exciting demos with the Dell team. >> Sumit, thanks for taking a few minutes out of your day to tell us a story, it's pretty exciting stuff, 'cause this whole flexibility is such an important piece of the whole cloud value proposition, and you guys are delivering it all over the place. >> Well, thank you guys for making the time today, I was excited to be here, thank you. >> All right, David, always good to see you, >> Good to see you. >> Smart man, alright, I'm Jeff Frick, you're watching theCUBE from theCUBE studios in Palo Alto, thanks for watching, we'll see you next time. (upbeat music)
Ken King & Sumit Gupta, IBM | IBM Think 2018
>> Narrator: Live from Las Vegas, it's the Cube, covering IBM Think 2018, brought to you by IBM. >> We're back at IBM Think 2018. You're watching the Cube, the leader in live tech coverage. My name is Dave Vellante and I'm here with my co-host, Peter Burris. Ken King is here; he's the general manager of OpenPOWER from IBM, and Sumit Gupta, PhD, who is the VP, HPC, AI, ML for IBM Cognitive. Gentlemen, welcome to the Cube. >> Sumit: Thank you. >> Thank you for having us. >> So, really, guys, a pleasure. We had dinner last night, talked about Picciano who runs the OpenPOWER business, appreciate you guys comin' on, but, I got to ask you, Sumit, I'll start with you. OpenPOWER, Cognitive Systems, a lot of people say, "Well, that's just the Power system. This is the old AIX business, it's just renaming it. It's a branding thing." What do you say? >> I think we had a fundamental strategy shift where we realized that AI was going to be the dominant workload moving into the future, and the systems that have been designed today or in the past are not the right systems for the AI future. So, we also believe that it's not just about silicon and even a single server. It's about the software, it's about thinking at the rack level and the data center level. So, fundamentally, Cognitive Systems is about co-designing hardware and software with an open ecosystem of partners who are innovating to maximize the data and AI support at a rack level. >> Somebody was talkin' to Steve Mills, probably about 10 years ago, and he said, "Listen, if you're going to compete with Intel, you can copy them; that's not what we're going to do." You know, he didn't like the SPARC strategy. "We have a better strategy," is what he said, and, "Oh, the strategy is, we're going to open it up, we're going to try to get 10% of the market. You know, we'll see if we can get there." But, Ken, I wonder if you could sort of talk about, just from a high level, the strategy and maybe go into the segments. >> Yeah, absolutely, so, yeah, you're absolutely right on the strategy. You know, we have completely opened up the architecture. Our focus on growth is around having an ecosystem and an open architecture so everybody can innovate on top of it effectively and everybody in the ecosystem can profit from it and gain good margins. So, that's the strategy, that's how we design the OpenPOWER ecosystem, but, you know, our segments, our core segments: AIX and Unix is still a core, very big core segment of ours. Unix itself is flat to declining, but AIX is continuing to take share in that segment through all the new innovations we're delivering. The other segments are all growth segments, high-growth segments, whether it's SAP HANA, our cognitive infrastructure and modern data platform, or even what we're doing in the hyperscale data centers. Those are all significant growth opportunities for us, and those are all Linux-based, and, so, that is really where a lot of the OpenPOWER initiatives are driving growth for us and leveraging the fact that, through that ecosystem, we're getting a lot of incremental innovation that's occurring and it's delivering competitive differentiation for our platform. I say for our platform, but that doesn't mean just for IBM, but for all the ecosystem partners as well, and a lot of that was on display on Monday when we had our OpenPOWER summit. >> So, talk more about the OpenPOWER summit: what was that all about, who was there? Give us some stats on OpenPOWER and the ecosystem. >> Yeah, absolutely.
So, it was a good day; we're up to well over 300 members. We have over 50 different systems that are coming out in the market from IBM or our partners. Over 20 different manufacturers out there are actually developing OpenPOWER systems. A lot of announcements, or a lot of statements, were made at the summit that we thought were extremely valuable. First of all, we got the number one server vendor in Europe, Atos, designing and developing P9, the number one in Japan, Hitachi, the number one in China, Inspur. We got top ODMs like Super Micro, Wistron, and others that are also developing their POWER9 systems. We have a lot of different component providers on the new PCIe Gen 4, on the open cabinet capabilities; a lot of announcements were made by a number of component partners and accelerator partners at the summit as well. The other thing I'm excited about is we have over 70 ISVs now on the platform, and a number of statements were made and announcements on Monday from people like MapD, Anaconda, H2O, Conetica, and others who are leveraging those innovations brought on the platform, like NVLink and the coherency between GPU and CPU, to do accelerated analytics and accelerated GPU database kinds of capabilities, but the thing that had me the most excited on Monday were the end users. I've always said, and the analysts always ask me the questions of, when are you going to start penetrating the market? When are you going to show that you've got a lot of end users deploying this? And there were a lot of statements by a lot of big players on Monday. Google was on stage and publicly said the IO was amazing, the memory bandwidth is amazing. We are deploying Zaius, which is the POWER9 server, in our data centers and we're ready for scale, and it's now Google strong, which is basically saying that this thing is hardened and ready for production, but we also (laughs) had a number of other significant ones. Tencent talkin' about deploying OpenPOWER, 30% better efficiency, 30% less server resources required; the cloud arm of Alibaba talkin' about how they're putting it on their X-Dragon, they have it in a pilot program, they're asking everybody to use it now so they can figure out how they go into production. PayPal made statements about how they're using it for machine learning and deep learning to do fraud detection, and we even had Limelight, who is not as big a name, but >> CDN, yeah. >> They're a CDN tool provider to people like Netflix and others. We're talkin' about the great capability with the IO and the ability to reduce the buffering and improve the streaming for all these CDN providers out there. So, we were really excited about all those end users and all the things they're saying. That demonstrates the power of this ecosystem. >> Alright, so just to comment on the architecture and then I want to get into the Cognitive piece. I mean, you guys did, years ago, little endian, recognizing you've got to get the software base to be compatible. You mentioned, Ken, bandwidth, IO bandwidth, the CAPI stuff that you've done. So, there's a lot of incentives, especially for the big hyperscale guys, to be able to do more with less, but, to me, let's get into the AI, the Cognitive piece. Bob Picciano comes over from running a $15 billion analytics business, so, obviously, he's got some knowledge. He's bringin' in people like you with all these cool buzzwords in your title. So, talk a little bit about infrastructure for AI and why Power is the right platform.
>> Sure, so, I think we all recognize that the performance advantages and even power advantages that we were getting from Dennard scaling, also known as Moore's Law, are over, right. So, people talk about the end of Moore's Law, and that's really the end of gaining processor performance with Dennard scaling and Moore's Law. What we believe is that to continue to meet the performance needs of all of these new AI and data workloads, you need accelerators, and not just compute accelerators, you actually need accelerated networking. You need accelerated storage, you need high-density memory sitting very close to the compute power, and, if you really think about it, what's happened is, again, system view, right, we're not silicon view, we're looking at the system. The minute you start looking at the silicon, you realize you want to get the data to where the compute is, or the compute to where the data is. So, it all becomes about creating bigger pipelines, fatter pipelines, to move data around to get to the right compute piece. For example, we put much more emphasis on a much faster memory system to make sure we are getting data from the system memory to the CPU. >> Coherently. >> Coherently, that's the main memory. We put interfaces on POWER9 including NVLink, OpenCAPI, and PCIe Gen 4, and that enabled us to get that data either from the network to the system memory, or out back to the network, or to storage, or to accelerators like GPUs. We built and embedded these high-speed interconnects into POWER9, into the processor. Nvidia put NVLink into their GPU, and we've been working with partners like Xilinx and Mellanox on getting OpenCAPI onto their components. >> And we're seeing up to 10x for both memory bandwidth and IO over x86, which is significant. You should talk about how we're seeing up to 4x improvement in training of ML/DL algorithms over x86, which is dramatic in how quickly you can get from data to insight, right? You could take training and turn it from weeks to days, or days to hours, or even hours to minutes, and that makes a huge difference in what you can do in any industry as far as getting insight out of your data, which is the competitive differentiator in today's environment. >> Let's talk about this notion of architecture, or systems especially. The basic platform for how we've been building systems has been relatively consistent for a long time. The basic approach to how we think about building systems has been relatively consistent. You start with the database manager, you run it on an Intel processor, you build your application, you scale it up based on SMP needs. There have been some variations; we're going into clustering, because we do some other things, but you guys are talking about something fundamentally different, and flash memory, the ability to do flash storage, which dramatically changes the relationship between the processor and the data, means that we're not going to see all of the organization of the workloads around the server, see how much we can do in it. It's really going to be much more of a balanced approach. How is Power going to provide that more balanced systems approach as we distribute data, as we distribute processing, as we create a cloud experience that isn't in one place, but is in more places? >> Well, this ties exactly to the point I made around it's not just accelerated compute, which we've all talked about a lot over the years, it's also about accelerated storage, accelerated networking, and accelerated memories, right.
This is really the point: if you don't have a fast pipeline into the processor from all of this wonderful storage and flash technology, there's going to be a choke point in the network, or there'll be a choke point once the data gets to the server; you're choked then. So, a lot of our focus has been, first of all, partnering with a company like Mellanox, which builds extremely high-bandwidth, high-speed >> And EOF. >> Right, right, and I'm using one as an example, right. >> Sure. >> I'm using one as an example, and that's where the large partnerships, we have like 300 partnerships, as Ken talked about, in the OpenPOWER Foundation. Those partnerships are because we brought together all of these technology providers. We believe that no one company can own the agenda of technology. No one company can invest enough to continue to give us the performance we need to meet the needs of the AI workloads, and that's why we want to partner with all these technology vendors who've all invested billions of dollars to provide the best systems and software for AI and data. >> But fundamentally, >> It's the whole construct of data-centric systems, right? >> Right. >> I mean, sometimes you got to process the data in the network, right? Sometimes you got to process the data in the storage. It's not just at the CPU; the GPU's a huge place for processing that data. >> Sure. >> How do you do that all coherently, and how things work together in a system environment, is crucial, versus a vertically integrated capability where the CPU provider continues to put more and more into the processor and disenfranchises the rest of the ecosystem. >> Well, those are the competing strategies that we want to talk about. You have Intel who wants to put as much on the die as possible. It's worked quite well for Intel over the years. You had to take a different strategy. If you tried to take Intel on with that strategy, you would have failed. So, talk about the different philosophies, but really I'm interested in what it means for things like alternative processing and your relationships in your ecosystem. >> This is not about company strategies, right. I mean, Intel is a semiconductor company and they think like a semiconductor company. We're a systems and software company, we think like that, but this is not about company strategy. This is about what the market needs, what client workloads need, and if you start there, you start with a data-centric strategy. You start with data-centric systems. You think about moving data around and making sure there is heterogeneity in the compute, there is accelerated compute, you have very fast networks. So, we just built the US's fastest supercomputer; we're currently building the US's fastest supercomputers under the project named Coral, and there are two supercomputers, one at Oak Ridge National Labs and one at Lawrence Livermore. These are the ultimate HPC and AI machines, right. The compute is a very important part of them, but networking and storage are just as important. The file system is just as important. The cluster management software is just as important, right, because if you are serving data scientists and biologists, they don't want to deal with, "How many servers do I need to launch this job on? How do I manage the jobs, how do I manage the servers?" You want them to just scale, right. So, we do a lot of work on our scalability. We do a lot of work in using Apache Spark to enable cluster virtualization and user virtualization.
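One way to get a feel for the host-to-device data path being described here is to time a simple transfer on whatever system is at hand. The sketch below is a rough measurement, assuming PyTorch with a CUDA-capable GPU is installed; the number it prints reflects your own interconnect (plain PCIe on most machines, NVLink between CPU and GPU on the POWER9 systems discussed above) and is not a claim about any particular figure.

```python
# Rough host-to-device bandwidth check; assumes PyTorch with a CUDA device.
import torch

assert torch.cuda.is_available(), "needs a CUDA-capable GPU"

size_bytes = 1 << 30  # 1 GiB of float32 data
host_tensor = torch.randn(size_bytes // 4, pin_memory=True)

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

# Warm-up copy so allocation and driver setup are not part of the timing.
host_tensor.to("cuda", non_blocking=True)
torch.cuda.synchronize()

start.record()
device_tensor = host_tensor.to("cuda", non_blocking=True)
end.record()
torch.cuda.synchronize()

elapsed_s = start.elapsed_time(end) / 1000.0  # elapsed_time() reports milliseconds
print(f"host-to-device: {size_bytes / elapsed_s / 1e9:.1f} GB/s")
```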
>> Well, if we think about, I don't like the term data gravity, it's wrong from a lot of different perspectives, but if we think about it, you guys are trying to build systems in a world that's centered on data, as opposed to a world that's centered on the server. >> That's exactly right. >> That's right. >> You got that, right? >> That's exactly right. >> Yeah, absolutely. >> Alright, you guys got to go, we got to wrap, but I just want to close with, I mean, we always say infrastructure matters. You got Z growing, you got Power growing, you got storage growing, it's given a good tailwind to IBM, so, guys, great work. Congratulations, got a lot more to do, I know, but thanks for >> It's going to be a fun year. >> comin' on the Cube, appreciate it. >> Thank you very much. >> Thank you. >> Appreciate you having us. >> Alright, keep it right there, everybody. We'll be back with our next guest. You're watching the Cube live from IBM Think 2018. We'll be right back. (techno beat)