Sumit Puri, Liqid | CUBEConversation, March 2019
(upbeat music) >> From our studios, in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Hey, welcome back everybody, Jeff Frick here with theCUBE. We're at our Palo Alto studios having a CUBE Conversation, we're just about ready for the madness of the conference season to start in a few months, so it's nice to have some time to have things a little calmer in the studio, and we're excited to have a new company, I guess they're not that new, but they're relatively new, they've been working on a really interesting technology around infrastructure, and we welcome to the studio, first time, I think, Sumit Puri, CEO and co-founder of Liqid, welcome. >> Thank you guys, very, very happy to be here. >> And joined by our big brain, David Floyer, of course, the CTO and co-founder of Wikibon, who knows all things infrastructure. Dave, always good to see you. >> It's so good to see you. >> All right, so let's jump into this, Sumit, give us the basic overview of Liqid, what are you guys all about, a little bit of the company background, how long you've been around. >> No, absolutely, absolutely. Liqid is a software-defined infrastructure company. The technology that we've developed is referred to as composable infrastructure, think dynamic infrastructure, and what we do is we go and we turn data center resources from statically configured boxes into dynamic, agile infrastructure. Our core technology is two-part. Number one, we have a fabric layer that allows you to interconnect off-the-shelf hardware, but more importantly, we have a software layer that allows you to orchestrate, or dynamically configure, servers at the bare metal. >> So, who are you selling these solutions to? What's your market, what's the business case for this solution? >> Absolutely, so first, I guess, let me explain a little bit about what we mean by composable infrastructure.
Rather than building servers by plugging devices into the sockets of the motherboard, with composability it's all about pools, or trays, of resources. A tray of CPUs, a tray of SSDs, a tray of GPUs, a tray of networking devices: instead of plugging those into a motherboard, we connect those into a fabric switch, and then we come in with our software and we orchestrate, or recompose, at the bare metal. Grab this CPU, grab those four SSDs, these eight GPUs, and build me a server, just like you were plugging devices into the motherboard, except you're defining it in software, and on the other side you're getting delivered infrastructure of any size, shape, or ratio that you want. Except that infrastructure is dynamic: when we need another GPU in our server, we don't send a guy with a cart to plug the device in, we reprogram the fabric and add or remove devices as required by the application. We give you all the flexibility that you would get from public cloud, on the infrastructure that you are forced to own. And now, to answer your question of where we find a natural fit for our solution, one primary area is obviously cloud. If you're building a cloud environment, whether you're providing cloud as a service or whether you're providing cloud to your internal customers, building a more dynamic, agile cloud is what we enable. >> So, is the use case more just to use your available resources and reconfigure them to set up something that basically runs that way for a while, or are customers more using it to dynamically reconfigure those resources based on, say, a temporary workload? That's kind of a classic cloud example, where you need a bunch of something now, but not necessarily forever. >> Sure. The way we look at the world is very much around resource utilization. I'm buying this very expensive hardware, I'm deploying it into my data center, and typical resource utilization is very low, below 20%, right?
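The compose-and-recompose flow described here can be sketched in a few lines of code. This is a hedged illustration only: the class, method, and device names are hypothetical, and the real Liqid software orchestrates physical devices over a PCIe fabric rather than Python lists.

```python
# Hypothetical sketch of composable infrastructure: disaggregated pools of
# devices on a fabric, assembled into bare-metal "servers" in software.
# None of these names come from the actual Liqid API.

class Fabric:
    """Tracks free devices per resource pool and per-server allocations."""

    def __init__(self, pools):
        self.free = {kind: list(devices) for kind, devices in pools.items()}
        self.servers = {}

    def compose(self, name, **request):
        """Grab devices from the pools and assemble them into one server."""
        allocation = {}
        for kind, count in request.items():
            if len(self.free[kind]) < count:
                raise RuntimeError(f"not enough free {kind} devices")
            allocation[kind] = [self.free[kind].pop() for _ in range(count)]
        self.servers[name] = allocation
        return allocation

    def add_device(self, name, kind, count=1):
        """Reprogram the fabric to hot-add devices -- no cart required."""
        for _ in range(count):
            self.servers[name].setdefault(kind, []).append(self.free[kind].pop())

    def decompose(self, name):
        """Return a server's devices to the free pools for reuse."""
        for kind, devices in self.servers.pop(name).items():
            self.free[kind].extend(devices)

fabric = Fabric({
    "cpu": [f"cpu{i}" for i in range(4)],
    "ssd": [f"ssd{i}" for i in range(16)],
    "gpu": [f"gpu{i}" for i in range(8)],
})
# "Grab this CPU, those four SSDs, these eight GPUs, and build me a server."
fabric.compose("ml-server", cpu=1, ssd=4, gpu=8)
fabric.decompose("ml-server")  # workload done: free everything back to the pools
```

The point of the sketch is the lifecycle: resources come from shared pools, a "server" is just a software-defined allocation, and decomposing returns everything for the next workload.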
So what we enable is the ability to get better resource utilization out of the hardware that you're deploying inside your data center. If we can take a resource that's utilized 20% of the time, because it's deployed as a static element inside of a box, and we can raise the utilization to 40%, does that mean we are buying less hardware inside of our data center? Our argument is yes: if we can take rack-scale efficiency from 20% to 40%, our belief is we can do the same amount of work with less hardware. >> So it's a fairly simple business case, then, to do that. So who are your competition in this area? Is it people like HP or Intel, or, >> That's a great question. I think both of those are interesting companies. I think HPE is the 800-pound gorilla around this term called composability, and we take a slightly different approach than the way that those guys take it. I think first and foremost, the way that we're different is because we're disaggregated, right? When we sell you trays of resources, we'll sell you a tray of SSDs or a tray of GPUs, where HP takes a converged solution, right? Every time I'm buying resources for my composable rack, I'm paying for CPUs, SSDs, GPUs, all of those devices as a converged resource, so they are converged, we are disaggregated. We are bare metal: we have a PCIe-based fabric up and down the rack, theirs is an Ethernet-based fabric, and there are no Ethernet SSDs, there are no Ethernet GPUs, at least today, so by using Ethernet as your fabric, they're forced to do virtualization and protocol translation, so they are not truly bare metal. We are bare metal; we view them more as a virtualized solution. We're an open ecosystem, we're hardware-agnostic, right? We allow our customers to use whatever hardware they're using in their environment today. Once you've kind of gone down that HP route, it's very much a closed environment. >> So what about some of the customers that you've got?
Which sort of industries, which sort of customers? I presume this is for the larger types of customers, in general, but say a little bit about where you're making a difference. >> No, absolutely, right? So, obviously at scale, composability has even more benefit than in smaller deployments. I'll give you just a couple of use case examples. Number one, we're working with a transportation company, and what happens with them at 5 p.m. is actually very different than what happens at 2 a.m., and the model that they have today is a bunch of static boxes, and they're playing a game of workload matching. If the workload that comes in fits the appropriate box, then the world is good. If the workload that comes in ends up on a machine that's oversized, then resources are being wasted, and what they said was, "We want to take a new approach. We want to study the workload as it comes in, dynamically spin up small, medium, or large, depending on what that workload requires, and as soon as that workload is done, free the resources back into the general pool." Right, so that's one customer: by taking a dynamic approach, they're changing the TCO argument inside of their environment. And for them, it's not a matter of am I going dynamic or am I going static, everyone knows dynamic infrastructure is better, no one says, "Give me the static stuff." For them, it's am I going public cloud, or am I going on prem. That's really the question. Public cloud is very easy, but when you start thinking about next-generation workloads, things that leverage GPUs and FPGAs, those instantiations on public cloud are just not very cheap. So we give you all of that flexibility that you're getting on public cloud, but we save you money by giving you that capability on prem. So that's use case number one. Another use case is very exciting for us: we're working with a studio down in southern California, and they leverage these NVIDIA V100 GPUs.
During the daytime, they give those GPUs to their AI engineers; when the AI engineers go home at night, they reprogram the fabric and they use those same GPUs for rendering workloads. They've taken $50,000 worth of hardware and they've doubled the utilization of that hardware. >> The other use case we talked about before we turned the cameras on there, which was pretty interesting, was kind of multiple workloads against the same data set, over a series of time where you want to apply different resources. I wonder if you can unpack that a little bit, because I think that's a really interesting one that we don't hear a lot about. >> So, we would say about 60-plus to 70% of our deployments in one way or another touch the realm of AI. AI is actually not an event, AI is a workflow. What do we do? First we ingest data, that's very networking-centric. Then we scrub and we clean the data, that's actually CPU-centric. Then we're running inference, and then we're running training, that's GPU-centric. Data has gravity, right? It's very difficult to move petabytes of data around, so what we enable is the composable AI platform: leave data at the center of the universe, and reorchestrate your compute, networking, and GPU resources around the data. That's the way that we believe that AI is approached. >> So we're looking forward in the future, what are you seeing where you can make a difference in this? I mean, a lot of changes happening, there's Gen 4 coming out in PCIe, there's GPUs which are moving down to the edge, how do you see, where do you see you're going to make a difference over the next few years? >> That's a great question. So I think there's two parts to look at, right? Number one is the physical layer, right? Today we build, or we compose, based upon PCIe Gen 3, because for the first time in the data center, everything is speaking a common language. When SSDs moved to NVMe, you had SSDs, network cards, GPUs, CPUs, all speaking a common language, which was PCIe.
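The studio example above, reallocating a single GPU pool by time of day, can be sketched roughly as follows. The hours, pool names, and function are hypothetical illustrations of the policy, not a real Liqid workflow:

```python
# Illustrative day/night GPU reallocation: the same pool of GPUs serves AI
# engineers during business hours and the render farm overnight. The 9-to-6
# window and the target names are assumptions made up for this sketch.

def assign_gpus(hour, gpus):
    """Decide which composed system owns the GPU pool at a given hour (0-23)."""
    if 9 <= hour < 18:  # business hours: interactive AI development
        return {"ai-workstations": list(gpus), "render-farm": []}
    # evenings and overnight: recompose the same devices for rendering
    return {"ai-workstations": [], "render-farm": list(gpus)}

gpus = [f"v100-{i}" for i in range(8)]
daytime = assign_gpus(10, gpus)    # all eight GPUs go to the AI engineers
overnight = assign_gpus(23, gpus)  # same GPUs recomposed into the render farm
```

The same shape extends to the AI pipeline he describes: each stage (ingest, scrub, inference, training) would request a different ratio of NICs, CPUs, and GPUs around the one fixed data set.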
So that's why we've chosen to build our fabric on this common interconnect, because that's how we enable bare metal orchestration without translation and virtualization, right? Today it's PCIe Gen 3; as the industry moves forward, Gen 4 is coming. Gen 4 is here. We've actually announced our first PCIe Gen 4 products already, and by the end of this year, Gen 4 will become extremely relevant in the market. Our software has been architected from the beginning to be physical-layer-agnostic, so whether we're talking PCIe Gen 3, PCIe Gen 4, or in the future something referred to as Gen-Z, (laughing) it doesn't matter for us, we will support all of those physical layers. For us it's about the software orchestration. >> I would imagine, too, like TPUs and other physical units that are going to be introduced in the system, too, you're architected to be able to take those, new-- >> Today, today we're doing CPUs, GPUs, NVMe devices, and we're doing NICs. We just made an announcement: now we're orchestrating Optane memory with Intel. We've made an announcement with Xilinx where we're orchestrating FPGAs with Xilinx. So this will continue, we'll continue to find more and more of the resources that we'll be able to orchestrate, for a very simple reason: everything has a common interconnect, and that common interconnect is PCIe. >> So this is an exciting time in your existence. Where are you? I mean, how far along are you to becoming the standard in this industry? >> Yeah, no, that's a great question, and I think what we get asked a lot is what company are you most similar to, or are you most like, at the early stage. And what we say is we, a lot of the time, compare ourselves to VMware, right? VMware is the hypervisor for the virtualization layer. We view ourselves as that physical hypervisor, right? We do for physical infrastructure what VMware is doing for virtualized environments.
And just like VMware has enabled many of the market players to get virtualized, our hope is we're going to enable many of the market players to become composable. We're very excited about our partnership with Inspur, which we've just recently announced; they're the number three server vendor in the world, and we've announced an AI-centric rack which leverages the servers and the storage solutions from Inspur, tied to our fabric, to deliver a composable AI platform. >> That's great. >> Yeah, and it seems like the market for cloud service providers, 'cause we always talk about the big ones, but there's a lot of them, all over the world, is a perfect use case for you, because now they can actually offer the benefits of cloud flexibility by leveraging your infrastructure to get more miles out of their investments into their backend. >> Absolutely, cloud, cloud service providers, and private cloud, that's a big market and opportunity for us, and we're not necessarily chasing the big seven hyperscalers, right? We'd love to partner with them, but for us, there's 300 other companies out there that can use the benefit of our technology. They don't necessarily have the R&D dollars available that some of the big guys have, so we come in with our technology and we enable those cloud service providers to be more agile, to be more competitive. >> All right, Sumit, before we let you go, conference season's coming up, we were just at RSA yesterday, big shows comin' up in May, where you guys, are we going to cross paths over the next several weeks or months?
>> No, absolutely, we've got a handful of shows coming up, a very exciting season for us. We're going to be at OCP, the Open Compute Project conference, actually next week, and then right after that, we're going to be at the NVIDIA GPU Technology Conference; we're going to have a booth at both of those shows, and we're going to be doing live demos of our composable platform. And then at the end of April, we're going to be at the Dell Technologies World conference in Las Vegas, where we're going to have a large booth and we're going to be doing some very exciting demos with the Dell team. >> Sumit, thanks for taking a few minutes out of your day to tell us a story, it's pretty exciting stuff, 'cause this whole flexibility is such an important piece of the whole cloud value proposition, and you guys are delivering it all over the place. >> Well, thank you guys for making the time today, I was excited to be here, thank you. >> All right, David, always good to see you. >> Good to see you. >> Smart man. All right, I'm Jeff Frick, you're watching theCUBE from theCUBE studios in Palo Alto, thanks for watching, we'll see you next time. (upbeat music)