Yaron Haviv, Iguazio | KubeCon + CloudNativeCon NA 2019
>>Live from San Diego, California, it's theCUBE, covering KubeCon + CloudNativeCon, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners.

>>Welcome back. This is theCUBE's coverage of KubeCon + CloudNativeCon 2019 in San Diego, with 12,000 in attendance. I'm Stu Miniman and my cohost is John Troyer. Welcoming back to the program a multi-time CUBE alumni, Yaron Haviv, CTO and cofounder of Iguazio. We've had quite a lot of founders and CTOs, the big brains, at this show, yours included. So let's start: there's really a gathering here, a lot of effort building out a very complicated ecosystem. Give us first your overall impressions of the show and this ecosystem.

>>Yeah, so we were very early on in this ecosystem. We were in the first batch of CNCF members, when there were a few dozen of them, not a thousand, so I've been to all of these shows, and we're part of the CNCF committees for different things and initiatives. I think this has become much more mainstream. I told you before, it's sort of the new VMworld: a lot more of the infrastructure vendors, along with middleware and application vendors, are coming here.

>>All right, so one of the things we like about having you on the program, Yaron, is that you don't pull any punches. We've seen certain waves of technology come with big promise and fall short. Big data was going to let us leverage everything, and a large percentage of those solutions had to stop or be pulled back. What's the cautionary tale we should learn so that we don't repeat it?

>>So I've been a CTO for many years in different companies, and what everyone used to say about me is that I'm always right, I'm just usually a year off; I'm usually a little too optimistic. So we've been talking about Cloudera and the Hadoop world going down, with Kubernetes and cloud services essentially replacing them. We were talking about that four years ago, and look at what's actually happening: with the collapse of MapR and Hortonworks merging into Cloudera, things are going down, and customers are now telling those vendors, we need an equivalent solution for Kubernetes, we're not going to maintain two clusters. So in general we've been picking on many of those trends. We invented serverless before it was even called serverless, with Nuclio, and now we're expanding it further. And now we see the new emerging trend, really, around machine learning and AI. That's the big thing, and that's our space: essentially we're doing a data science platform as a service, fully automated, built around serverless constructs, so people can develop things really, really quickly. And what I see is that a third of the people I talk to have some relation to machine learning and AI.

>>Yeah, maybe explain that for our audience a little bit. When Kubernetes first started it was very much an infrastructure discussion, but the last year or two it has become very application-specific. We hear many people talking about those data use cases, and AI and ML are still early days. How does that fit into the overall picture?

>>It's simple. If you're moving to the cloud, there are two kinds of workloads: lift-and-shift workloads and new workloads.
Lift and shift, why bother moving them to Kubernetes? So you end up with new workloads. Everyone is trying to be cloud native, with serverless, elastic services and all that, and everyone has to feed data and machine learning into those new applications. That's why you see these trends around data integration, the various frameworks, and all that in this space. I don't think it's a coincidence; it's because new applications incorporate intelligence. That's why you hear so much talk about these things.

>>What I love about the architecture, given what you just said, is that people don't want to run yet another cluster; I don't want to run two versions of Kubernetes. You're still built on that infrastructure framework and the knowledge of how to do serverless, how to scale nodes up and down, persistent storage and all that good stuff, and run TensorFlow and all these big data apps. Can you talk about that as the advantage to your customer? It seems like you could run on top of GKE, you could run on prem, I could run my own Kubernetes, or you could just give me one.

>>We say Kubernetes is not the interesting part, and I don't want anyone to get offended, but Kubernetes is not the big deal. The big deal is that organizations want to be competitive in this digital world; they need to build new applications, while the old ones are in maintenance mode. The point is delivering new applications with elastic scaling, because there may be a million people behind some app you offer your customers. That's the key thing, and Kubernetes is a way to deliver those microservices. But what we figured out is that it's still very complicated for people, especially in the data science world. It takes them a few weeks to deliver a model in a Jupyter notebook, and then productizing it takes about a year; we've seen between six months and a year to productize things that are relatively simple. That's because people have to think about the container, the TensorFlow version, the CUDA driver, how to scale it, how to make it perform, et cetera. So here's what we came up with. Traditionally, serverless means abstraction but with very low performance and a very limited set of use cases. We said serverless is about elastic scaling, pay per use, and full automation of the DevOps around it, so why not apply it to other use cases: really high concurrency, high-speed batch, distributed training, distributed workloads? If you know my background, I've been at Mellanox and other high-performance companies, so we have high-performance DNA; we don't know how to build things that are extremely slow, it sort of irritates me. The point is, how can we apply this notion of abstraction and scaling to a variety of workloads? That is essentially what Iguazio is: a combination of high-speed data technology for moving data between those functions, and extremely high-speed serverless functions that work across the different domains of data collection and ingestion, data analytics, machine learning training, and model serving.
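[Editor's note: to make the serverless-function idea just described concrete, here is a minimal sketch of what a Python handler for Nuclio, Iguazio's open-source serverless engine, looks like. The payload shape and the scoring logic are illustrative assumptions; only the handler(context, event) signature and the context logger/response helpers reflect the Nuclio Python runtime.]

```python
import json

# Minimal Nuclio-style Python handler: the platform invokes handler(context, event)
# for each incoming event and scales the number of replicas with load, so the
# developer writes only the processing logic.
def handler(context, event):
    # event.body carries the raw payload; here we assume it is a JSON record
    record = json.loads(event.body)
    context.logger.info('scoring record %s' % record.get('id'))

    # ... feature extraction / model inference would go here (placeholder) ...
    score = 0.5

    return context.Response(body=json.dumps({'id': record.get('id'), 'score': score}),
                            content_type='application/json',
                            status_code=200)
```

Deployment and scaling are handled by the platform (for example through the nuctl CLI or the UI), which is the "click and it runs" experience described in the interview.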
So a customer can come onto our platform, and we have testimonials around this: things they thought would take months and months to build on Amazon, or even on prem, they built on our platform in a few weeks with fewer people, because the focus is on building the application, not on tuning Kubernetes. Now, we go to customers, some of them large banks, and they say, "IT likes Kubernetes, we have our own Kubernetes." So we don't bother; initially we used to bring our own Kubernetes, but now I don't mind. We do struggle sometimes, because our level of expertise in Kubernetes is more sophisticated than theirs. They say, "We've installed Kubernetes," and we come with our software stack and find that, no, you didn't configure the security, you didn't configure ingress, et cetera. Sometimes it's easier for us to bring it, but we don't want to get into that tension with IT. Our focus is to accelerate development of these new, intelligent applications, to move beyond traditional data analytics and data science, which is about reporting, to what people actually want to do now. One application we announced this week is around real-time cyber intelligence collection, and it's being used by a few governments: you can collect a lot of information, SMS, telephony, video, et cetera, and in real time you can detect terrorists. Those applications require high concurrency, always-on operation, rolling upgrades, things that weren't there in traditional BI and Oracle-style reporting. So you have this wave of putting intelligence into highly concurrent online applications, and it requires all the DevOps aspects, but also the data analytics and machine learning aspects, to come along.

>>All right, so speaking of those workloads for machine learning, Kubeflow is a project moving in that space. Give us the update there.

>>Yeah, so there is a rising star in the Kubernetes community around how to automate machine learning workflows, and that's Kubeflow. I'm personally one of the committers on Kubeflow. It's very complicated, because Google developed Kubeflow as one of the services on GKE and tweaked everything; it works great on GKE, given that it's relatively new technology, but people want to run it somewhere more generic. So one of the things in our platform is a managed Kubeflow that works natively with all the rest of the solutions. The other thing we've done is make it fully serverless. Instead of the Kubeflow approach, which is very Kubernetes-oriented, containers, YAMLs, all that, in our flavor you just create functions, chain the functions together, click, and it runs.

>>You've mentioned serverless a couple of times. How does serverless, as you define it, fit in with Kubernetes? Is it working together, just functions on top? I'm just trying to map it out.

>>You'll hear different things. I think when most people say serverless, they mean front-end application pieces served at low concurrency, et cetera. When we say serverless, we have eight different engines, each one very good in a different domain, like distributed deep learning, distributed machine learning, et cetera.
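[Editor's note: stepping back to the Kubeflow discussion above, chaining steps in upstream Kubeflow Pipelines looks roughly like the sketch below, using the kfp v1 Python DSL current around this time. The container images, paths, and step names are hypothetical placeholders; Iguazio's own "create a function and chain it" flavor is a layer on top of this and is not shown here.]

```python
import kfp
import kfp.dsl as dsl

@dsl.pipeline(name='ingest-train', description='Toy two-step ML pipeline')
def ingest_train_pipeline(source_path: str = '/data/raw'):
    # Step 1: hypothetical ingestion container that writes a dataset file
    ingest = dsl.ContainerOp(
        name='ingest',
        image='example.registry/ingest:latest',      # placeholder image
        arguments=['--source', source_path],
        file_outputs={'dataset': '/tmp/dataset.csv'})

    # Step 2: hypothetical training container; referencing the ingest step's
    # output both passes the value and orders the two steps
    dsl.ContainerOp(
        name='train',
        image='example.registry/train:latest',       # placeholder image
        arguments=['--dataset', ingest.outputs['dataset']])

if __name__ == '__main__':
    # Compile to an Argo workflow archive that can be uploaded to Kubeflow Pipelines
    kfp.compiler.Compiler().compile(ingest_train_pipeline, 'ingest_train.tar.gz')
```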
And we know how to fit them to many workloads. So for me, we deliver the elastic scaling, the pay per use, and the ease of use of essentially no DevOps across all eight workloads that we address; for most others it's a one-trick pony. I think the future really is moving in that direction. And if you think about serverless, there's another aspect that is very important for machine learning, and that's reusability. I'm not going to develop every algorithm in the world; there are plenty of companies, users, and developers who can develop an algorithm, and I can just consume it. So the future in data science, and not just data science, is marketplaces of premade algorithms, or analytic tools, or maybe even vendors licensing their technology as prepackaged solutions. We're great believers in forgetting about the infrastructure, focusing on the business components, and daisy-chaining them into a pipeline, like a Kubeflow pipeline, and running that. That gives you the most reusability, the lowest cost, the best performance, et cetera.

>>That's great. I just want to double-click on the serverless idea one more time. It's an architectural pattern, and you're developing these concepts yourself; sometimes the concept gets confused with the implementations of other people's serverless frameworks. Is that correct?

>>There is confusion. I get asked a lot, how do you compare your technology to, let's say, Knative, or OpenFaaS, or one of the other open-source frameworks? An open community project is very nice for hobbyists, but if you're an enterprise you need security, LDAP integration, authentication; you need GUIs, you need a CLI, you need all of those things. Amazon provides that with Lambda. Can you compare Lambda to Knative? No. With Knative I need to go from Git and build and all that. Serverless is about taking a function, clicking, and deploying; it's not about building. And the problem is that this conference is full of people who like to build. They don't like to get something that just works; they want the Lego building blocks so they can play. So in our view, serverless is not OpenFaaS or Knative. It's something that you click and it works, and it has the full enterprise set of features. We've also extended it to different orders of magnitude of performance. I'll give you an anecdote. I did a comparison for a customer asking me the same question, not about Knative this time but about Lambda: how do you guys compare with Lambda? Nuclio is extremely high performance; we do up to 400,000 events per second on a single process. And the customer said, "You know what, I have a use case where I need about 5,000 events per second total across all my functions. How do you compare against Lambda?" We went into the price calculator: 5,000 events per second on Lambda is about $50,000. Even with a simple function we do something like 60,000 events per second per process, so a single $500 VM on Amazon running our technology stack more than covers those 5,000 events per second. $500 versus $50,000 is 100 times more expensive. So it depends on the design point; we designed our solution to be extremely efficient at high concurrency.
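[Editor's note: as a back-of-the-envelope check on the kind of comparison in that anecdote, here is a sketch of the arithmetic. The Lambda list prices below are the published 2019 rates; the per-invocation duration and memory are assumptions, and the monthly total swings widely with them, which is exactly the "design point" caveat.]

```python
# Rough monthly cost of a sustained event stream on AWS Lambda (2019 list prices),
# ignoring the free tier, which is negligible at this scale.
REQUEST_PRICE_PER_MILLION = 0.20      # USD per 1M requests
GB_SECOND_PRICE = 0.0000166667        # USD per GB-second of compute
SECONDS_PER_MONTH = 30 * 24 * 3600

def lambda_monthly_cost(events_per_sec, duration_sec, memory_gb):
    requests = events_per_sec * SECONDS_PER_MONTH
    request_cost = requests / 1e6 * REQUEST_PRICE_PER_MILLION
    compute_cost = requests * duration_sec * memory_gb * GB_SECOND_PRICE
    return request_cost + compute_cost

# Assumed workload: 5,000 events/s, each invocation ~250 ms at 1 GB (assumptions)
print(round(lambda_monthly_cost(5000, 0.25, 1.0)))   # ~56,592 USD/month

# Flat-rate alternative: if a single ~$500/month VM running an efficient stack can
# sustain the same 5,000 events/s (the throughput claim above), the cost stays ~$500.
print(500)
```

With those assumed parameters the Lambda figure lands in the same ballpark as the $50,000 quoted above; cut the duration or memory and it drops sharply, which is why the comparison depends so heavily on the design point.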
If you just need something to do a webhook, use Lambda. If you're trying to build an efficient, high-concurrency enterprise application on a serverless architecture, come to us.

>>Yeah, so I'll posit this to you, because it reminds me of what you were saying about the builders here. In the early days of VMware, to get it to work the way I wanted, people needed to participate and build it themselves, and there's the IKEA effect: if I helped build it a little bit, I like it more. But to get the vast majority to adopt these things, it needs to become simplified; I can't have all the applications move over to this environment if I have to constantly tweak everything. That's the trend we've really been seeing this year: some of that simplification needs to get there. There's focus on the operators, the day-two operations, the applications, so that anybody can get there without having to build it themselves. We know there's still work to be done, but if we've crossed the chasm and we want the majority to adopt this, it can't require constant customization; it needs to be more turnkey.

>>Yeah, and I think there's a different attitude between what you'll see at Amazon re:Invent in a couple of weeks and what you see here. There, the focus is: we're building an application, so what tools did Jassy just launch today on the floor that we can consume to build our new application? They're not thinking about how Andy Jassy built his tools. The opposite is true here: how is all of this working underneath? Who cares? You care about having connectivity between two points and so on; how it's implemented, let someone else take care of that, and then you can apply the few people you have to solving your business problem, not to infrastructure. I just met a guy who came to our booth and saw our demo, pretty impressive, how we write a function and it scales and does everything automatically. He said, "We want to build something like you're doing, well, not really, only 10% of what you just showed me, and we have about six people, and for three months we've just been scratching our heads." I said, okay, you can use our platform, pay us a software license, and now you get ten times more functionality and your six people can do something more useful. He said, right, let's do a POC. So that's our intention, and I think people are starting to get it, because Kubernetes is not easy. Again, people tell me, "We've installed Kubernetes, now install your stack," and then it turns out they haven't installed like 20% of all the things you need.

>>Well, Yaron Haviv, always a pleasure to catch up with you. Thanks for all the updates, and I know we'll catch up with you again soon.

>>Sure.

>>All right. For John Troyer, I'm Stu Miniman. We'll be back with more coverage here from KubeCon + CloudNativeCon in San Diego. Thanks for watching theCUBE.