Richard Hartmann, Grafana Labs | KubeCon + CloudNativeCon NA 2022
>>Good afternoon everyone, and welcome back to the Cube. I am Savannah Peterson, coming to you from Detroit, Michigan. We're at KubeCon day three. Such a series of exciting interviews. We've done over 30, but this conversation is gonna be extra special, don't you think, John? >>Yeah, this is gonna be a good one. Grafana Labs is here with us. We're getting into the conversation of what's going on in the industry: management, watching the Kubernetes clusters. There are large-scale conversations this week. It's gonna be a good one. >>Yeah, I'm very excited. He's also got a fantastic Twitter handle, @TwitchiH. Please welcome Richie Hartmann, who is the Director of Community at Grafana Labs. Richie, thank you so much for joining us. >>Thanks for having me. >>How's the show been for you? >>Busy, in a word. I have a ton of talks, the maintainer track, covering the governing board sessions, the TOC panel, and I ran a co-located day. So it's been busy. Monday I didn't have to run anything, that was quite nice. >>You have your hands in a lot. I'm not even gonna cover it all; looking at your bio, there are so many different things that you're working on. I know that Grafana specifically had some announcements this week. >>Yeah, we had quite a few. The two largest ones: one, we now have a full Kubernetes integration on Grafana Cloud. Our approach is generally extremely open source first, so we try to push stuff into the open source exporters, into mixins, into things which are out there as open source for anyone to use. But that's a bit like a tool set, not a ready-made solution. So when we talk integrations, we actually talk about things where you get this one-click experience: you log into your Grafana Cloud, you click "I have a Kubernetes," which probably most of us have, and things just work. You just ingest the data. You don't have to write dashboards, alerts, or everything else just to get started; you get extremely opinionated dashboards, SLOs, alerts, all those things made by experts, so anyone can use them and you don't have to reinvent the wheel for every single user. So that's the one. The other is, >>It's a big deal. >>Oh yeah, it is. We're investing heavily in integrations because, I mean, I don't have to convince anyone that Prometheus is the de facto standard in everything cloud native. But again, it's sometimes a little bit hard to handle or not easy to get into. So smoothing this path of onboarding yourself onto this stack and onto those types of solutions is what a lot of people need. Because if you look at the statistics from KubeCon, and we just heard this in the governing board session yesterday, like 60% of the people here are first-time attendees. So there are a lot of people who just come into this thing and who need: this is your path, this is where you should be going, or at least if you want to go there, this is how to get there. >>Here's your runway for takeoff. Yes. I think that's a really good point, and I love that you had those numbers. I was curious; I had seen on Twitter, speaking of Twitter, that there were a lot of people here coming for the first time. You're a community guy. Are we at an inflection point where this community is about to continue to scale? >>That's a very good question,
which I can't really answer. So I mean, >>Obviously, I bet you're gonna try. >>COVID changed a few things. Yeah, probably for most people. >>"A couple things." I mean, casually, it's like such a gentle way of putting that. That was beautiful. >>I'm gonna say yes, it's just gonna explode. All these new engineers are gonna learn Prometheus. They're gonna roll in with OpenMetrics, OpenTelemetry. I love it. >>You know, but at the same time, KubeCon is ramping back up. But if you look at the registration numbers between Valencia and Detroit, it was more or less the same. Interesting. So it didn't go back onto the full trajectory it was on up to 2019. I expect this to pick up again, but also with the economic situation and everything, I don't think— >>I think the jury's still out on hybrid. I think there's a lot more hybrid. Let's see how the projects are gonna go; that's what I think is gonna be the tell sign. How many people are participating? How are the projects advancing? Some of the momentum, >>I mean, from the project level, most of this is online anyway. Of course. That's how open source works, right? I've been working that way for ages. >>That's 'cause you don't have any travel budget, or any office, or, >>It's always been that way. >>Yeah, precisely. So the projects are arguably spearheading this development, and the online numbers, I have some numbers in my head but I'm not a hundred percent certain, they're higher for this time in Detroit than in Valencia, as far as I remember. Cool. So that is growing, and it's grown in parallel, which also is great because it's much more accessible, much more inclusive. You don't have to have a budget of at least, let's say, I don't know, two to five k to fly over the pond and attend this thing. You can just do it from your home. So that's a lot more inclusive. And I expect this to basically be a second, more or less orthogonal, growth path. But the best thing about KubeCon is the hallway track, just meeting people, talking to people, and that kind of thing is not really possible with, >>It's great to see people in person. >>No, and it makes such a difference. Even interviewing people in person too. And this whole community, I mean CNCF, every company here, is community first. It's how these projects come to be. I think it's awesome. I feel like you've got something you're wanting to say, Johnny. >>Yeah. And I love some of the advancements. Richie, we talked last time about, you know, OpenTelemetry, OpenMetrics. You're involved in dashboards. One of the themes here is ease of use, simplicity, developer productivity. Where do you see the ease of use going from a project standpoint? Prometheus, as you mentioned, is everywhere, pretty much all corners of the world, and new people are coming in. How are you making it easier? What's going on? Give us the update on that. >>So funnily enough, we had precisely this topic in the TOC panel just a few hours ago: about ease of use and about how to make things easier to handle for developers. Currently, if they just want to get into the cloud native scene, they have, we did some napkin math, maybe 10 tools at least which you have to be somewhat proficient in to just get started, which is honestly horrendous. Because,
like with a server, I just had my server: I installed my thing and it runs; maybe I need a database, but that's roughly it. And this needs to change again. Like, it's nice that everything has been unraveled and you don't have those service boundaries which you had before. You can do all the horizontal scaling, all the automatic scaling, all those things that are super nice. But at the same time, this complexity, which used to be nicely compartmentalized, was deliberately broken up. And so it's becoming a lot harder; we need to find new ways to compartmentalize this complexity back to human-understandable levels again, in particular as we keep onboarding new and new people. Of course it's just not a good use of anyone's time to learn the basics again and again and again. This is something which should just be compartmentalized and automated away. >>We were talking to Matt Klein earlier and he was talking about how, as projects become mature and all over the place and have reach and usage, you gotta work on the boring stuff. Yes. And when it's boring, that means you have success. Yes. But then you gotta work on the plumbing. What are some of the things that you guys are working on? Because people are relying on the product. >>Oh yeah. So with my Prometheus head on, the highlight feature is exponential, or native, or sparse histograms; there are like three different names for one single concept. If you know Prometheus, you currently have hard bucket boundaries where I say my latency is lower-or-equal two seconds, one second, a hundred milliseconds, what have you, and I can put stuff into those histogram buckets according to those predefined levels, which is extremely efficient on the code level. But it's not very nice for the humans, because you need to understand your system before you're able to choose good cutoff points. If you add new ones, that's completely fine. But if you want to actually change them, because you figured out that you made a fundamental mistake, you're going to have a break in the continuity of your observability data, and you cannot undo this into the past. That history is just gone. Native histograms, on the other hand, allow me to... okay, I'm not going to get into the math, but basically you define a single formula, and there comes a good default. If you have good reasons, then you can change it. But if you don't, just don't touch it. >>If people are into the math, hit him up on Twitter, @TwitchiH, he'll get you that math. >>So the, >>The thing is, people want the math, believe me. >>Oh yeah. I mean, we don't have time, but hit him up. >>There's PromCon in two weeks in Munich and there will be a whole talk about the dirty details of all of this stuff. But the high-level answer is: it just does what people would expect it to do, and with very little overhead you get high-resolution histograms, which is really important for a lot of use cases. But this is not just Prometheus. With my OpenMetrics head on, the highlight feature of OpenMetrics 2.0 will be, you guessed it, precisely the same. And with my OpenTelemetry head on: lo and behold, the same underlying technology is being put, or has been put, into OpenTelemetry.
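To make the classic-versus-native distinction above concrete, here is a minimal sketch in Go using the Prometheus client library. It is illustrative only: native-histogram support in client_golang is experimental, and the option name used here (NativeHistogramBucketFactor) is an assumption based on the client as it stood around late 2022, so treat the exact fields as placeholders rather than a definitive API.

```go
// Minimal sketch: a classic fixed-bucket histogram next to a native
// (sparse/exponential) histogram with prometheus/client_golang.
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Classic histogram: the bucket boundaries must be chosen up front.
	// Changing them later breaks the continuity of the recorded series.
	classic := prometheus.NewHistogram(prometheus.HistogramOpts{
		Name:    "http_request_duration_seconds",
		Help:    "Request latency with hand-picked buckets.",
		Buckets: []float64{0.1, 1, 2}, // 100ms, 1s, 2s: the "hard bucket boundaries"
	})

	// Native histogram: a single growth factor defines exponential buckets,
	// so no up-front cutoff points are needed.
	native := prometheus.NewHistogram(prometheus.HistogramOpts{
		Name:                        "http_request_duration_seconds_native",
		Help:                        "Request latency as a native histogram.",
		NativeHistogramBucketFactor: 1.1, // assumed experimental option name
	})

	prometheus.MustRegister(classic, native)

	// Record one observation in each for illustration.
	classic.Observe(0.42)
	native.Observe(0.42)

	http.Handle("/metrics", promhttp.Handler())
	_ = http.ListenAndServe(":8080", nil)
}
```

The classic histogram bakes the cutoff points into the series forever, which is exactly the continuity problem described above; the native histogram replaces them with one formula and a sensible default.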
And we've worked for months and months and even longer, between all the different projects, to ensure that we have one single standard which is actually compatible across them. One of the worst things which you can have in the cloud ecosystem is if you have subtly different things and they break in subtly wrong ways; it's much better to just not work than to break in a way which is just a little bit wrong, because you won't figure this out until it's too late. So with all three hats on, we spent insane amounts of time on making this happen and making this nice. >>Savannah, one of the things: we have so much going on at KubeCon. I mean, just unpacking this is probably another day of Cube. We can't go four days, but OpenTelemetry, >>I know, I know. I'm the same. >>OpenTelemetry. >>Challenge accepted. >>Sorry, we're gonna stay here. >>They shut the lights off on us last night. >>They were literally gonna pull the plug on us. Yeah, they've done that before. It's not the first time; we go until they kick us out. We love doing this. But OpenTelemetry has got a lot of news too, and we haven't really talked much about that. >>We haven't at all. So there's a lot of stuff going on that, I won't call it boring, that's like a code word, that's Cube talk for "it's working." So it's not bad, but there's a lot of stuff going on, like OpenTelemetry, OpenMetrics. This is the stuff that matters, 'cause when you go in at large scale, that's key. So what are we missing? What are people missing? What's going on at the show that you think is not actually being reported on? I mean, there's a lot of hype; WebAssembly, for instance, got a lot of hype. >>Oh yeah, I was gonna say, I'm glad you're asking this, because you've already mentioned about seven different hats that you wear. I can only imagine how many hats are actually in your hat cabinet. But you are someone with your fingers in a lot of different things, so you can kind of give us a state of the union. So go ahead, let's talk about it. >>So I think you already hit a few good points. Ease of use is definitely one of them, and improving the developer experience and not having this wall of pain. That is one of the really big ones. It's going to be interesting, because it is boring, it is janitorial, and it needs a different type of persona. A lot of, or maybe not most, but a large fraction of, developers like the shiny stuff. And we could see this in Prometheus, where initially the people who contributed the most were those restless people who need to fix that one thing: "this is impossible, I'm going to do it." Which changed over the years, where the people who now contribute the most are of the janitorial kind: keep things boring, keep things running, still make substantial changes, but more on the maintenance level. >>Yeah, the maintainers. I was just gonna bring that up. >>Yeah. On keeping things boring while still pushing them forward: the thing about ease of use is a lot of this is boring. A lot of this is strategy. A lot of this is toil. A lot of this takes lots of research, also in areas where developers are not really good, like UX for example, and UI; most software developers are really bad at those, 'cause they just think differently from normal humans, I guess. >>So that's an interesting observation that you just made.
We could unpack that on a whole other show as well. >>So the thing is, this is going to be interesting for the open source scene, because this needs deliberate investment by companies who assign people to those projects and say, okay, fix that one thing, or make it easier to use, what have you. That is a lot easier with first-party products and projects from companies, 'cause they can invest directly into the thing and they see much more of a value prop. It's kind of normal by now to allow developers, or even assign developers, onto open source projects. That's not so much the case for the TPMs, for the architects, for the UX and UI people, for the documentation people; there's not as much awareness that this is also driving value for everyone. >>Yeah, that's a great point. This whole workflow, this production system of open source, which has grown and keeps growing and will keep growing, these need to be funded. And one of the things we were talking about earlier in another session is the recession we're potentially hitting, and the global issues, the macroeconomics, that might force some of these projects or companies not to get VC funding. >>It's such a theme at the show. >>So to me, it's just not about VC funding. There are other funding mechanisms that are community oriented. There are companies participating, there are other mechanisms. Richie, if you could have your wish list of how things could progress in open source, what would you want to see happen in terms of how things are funded, how things are executed? 'Cause developers are going to run businesses. 'Cause ultimately, if you follow digital transformation to completion, IT and developers aren't a department serving the business; they are the business. And that's coming fast. What has to happen, in your opinion? If you had the magic wand, what would you snap your fingers to make happen? >>If I had a magic wand, that's very different from what is achievable. But let's, >>Okay, go with the magic wand first, 'cause we'll riff on that. >>I'm here for dreams. >>Yeah. I mean, I've been in open source for more than two decades now, and most of open source is being driven forward by people who are not being paid for it. For example, Grafana is the first time I'm actually paid by a company to do my community work. It's always been on the side. Of course I believe in it and I like doing it, I'm also not bad at it, and so I just kept doing it. But it was at night, on the weekends, and everything. And to be honest, it's still at night and on the weekends, but the majority of it is during paid company time, which is awesome. Most of the people who have driven this space forward are not in this position. They're doing it at night, they're doing it on the weekends, they're doing it out of dedication to a cause. >>The commitment is insane. >>Yeah. At the same time you have companies, mostly hyperscalers; either they have really big cloud offerings or they have a really big advertisement business, or both, and they're extracting a huge amount of value which has been created in large part elsewhere. Like, yes, they employ a ton of developers, but a lot of the technologies they build on, the shoulders of the giants they stand upon, those people are really poorly paid.
And there are some efforts, like foundations which redistribute a little bit of money and such. But if I had my magic wand, everyone who is in open source and actually drives things forward would get, I don't know, 20% of the value which they create, just magically somehow. >>Or other companies don't extract as much value and redistribute more, like putting more full-time engineers onto projects or whichever. That would be the ideal state, where the people who actually make the thing out of dedication are not more or less left on the sideline. Of course they're too dedicated to just say, okay, I'm not doing this anymore, you figure this stuff out, and let things tremble and falter. It's like with nurses and such, who know they have something which is important and they keep doing it because they believe in it. >>I think this is an opportunity to start messaging this narrative, because, yeah, absolutely, now we're at an inflection point where there's a big community. There is a shared responsibility, in my opinion, not just to spread the wealth, but to make sure that it's equally balanced, and I think there's a way to do that. I don't know how yet, but I see that more than ever. It's not just come in, raid the kingdom, steal all the jewels, monetize it, and throw some token money around. >>Well, and the burnout. Yeah, the other thing that I'm thinking about too is the financial aspect of this, and the cognitive load. And I'm curious, actually, when I ask you this question: how do you avoid burnout? You do a million different things, and I'm sure, in the open source community, that passion, >>Code? Yeah, so is it just writing code? >>Oh, my software engineering days are firmly over. I'm the cat herder and the janitor and that type of thing. I don't really write code anymore. >>So how do you avoid burnout? >>So I did crash head-first into burnout a few years ago. It was not nice. But that was still when I had a full day job, and that day job was super intense, and on top I did all the things. To be honest, a lot of the people who do this are really dedicated and are really bad at setting boundaries between work, >>And the rest of life. That's why I bring it up. Yeah, literally why I bring it up. >>I'm firmly in that area and I don't claim I have this fully figured out yet. It's also even more risky to some extent, because it's good if you're paid for this and you can do it during your work time. But on the other hand, if it's so nice, and your hobby and your job almost completely intersect, it, >>Becomes really, the lines are blurry. >>Yeah. And then, with work from home, you don't even commute anymore. You just sit down at your computer and you just have fun doing your stuff, and all of a sudden it's deep at night and you're still like, I want to keep going. >>Sounds like, gosh, something cute. I, >>I know, I was gonna say: passion is something we all have in common here. >>That's the key. That is the key point there: the passion project becomes the job. But now the contribution part is interesting, because this ecosystem now has a commercial aspect. Again, this is the balance between commercialization and keeping that organic production system that's called open source.
I mean, it's so fascinating, and this is amazing. I want to continue that conversation. >>It's awesome. Yeah, this is great. Richard, this entire conversation has been excellent. Thank you so much for joining us. How can people find you? I mean, I gave them your Twitter handle, but if they wanna find out more about Grafana, Prometheus, and the 1700 things you do, >>For Grafana, grafana.com; for Prometheus, prometheus.io; for my own stuff, github.com/RichiH/talks. I track all my talks in there. I currently don't have a personal website 'cause I stopped bothering, but that repository is where you find what I do; for example, the recording link for this will be uploaded to that GitHub. >>Yeah, great follow. You also run a lot of events and a lot of community activity. Congratulations to you. Also, I talked about this last time: the largest IRC network on earth, you ran that. Built a data center from scratch. What happened? You've done that? >>He hasn't yet built a hyperscale cloud to compete with Amazon. That's the next one. Why don't you put that on the plate? >>We'll be sure to feature whatever Richie does next year on the Cube. >>I'm game. Yeah. >>Fantastic. On that note, Richie, again, thank you so much for being here. John, always a pleasure. Thank you. And thank you for tuning in to us here, live from Detroit, Michigan, on the Cube. My name is Savannah Peterson, and here's to hoping that you find balance in your life this weekend.
Matt LeBlanc & Tom Leyden, Kasten by Veeam | VMware Explore 2022
(upbeat music) >> Hey everyone and welcome back to The Cube. We are covering VMware Explore live in San Francisco. This is our third day of wall to wall coverage. And John Furrier is here with me, Lisa Martin. We are excited to welcome two guests from Kasten by Veeam. Please welcome Tom Leyden, VP of Marketing, and Matt LeBlanc, not Joey from Friends, Matt LeBlanc, systems engineer for North America at Kasten by Veeam. Welcome guys, great to have you. >> Thank you. >> Thank you for having us. >> Tom-- >> Great, go ahead. >> Oh, I was going to say, Tom, talk to us about some of the key challenges customers are coming to you with. >> The key challenge that they have at this point is getting up to speed with Kubernetes. Everybody has it on their list: we want to do Kubernetes. But where are they going to start? Back when VMware came on the market, I was switching from Windows to Mac and I needed to run a Windows application on my Mac, and someone told me, "Run a VM." I went to the internet, I downloaded it, and in a half hour I was done. That's not how it works with Kubernetes. So that's a bit of a challenge. >> I mean, Kubernetes... Lisa, remember the early days of The Cube: OpenStack was kind of transitioning, Cloud was booming, and then Kubernetes was the glue that became the thing that pulled everybody together. It's now de facto in my mind. So that's clear, but there are a lot of different versions of it, and you hear VMware call it the dial tone. Remember, Pat Gelsinger: it's a dial tone. Turns out that came from Kit Colbert, or no, I think AJ kind of coined the term here, but it's since been adopted by everyone. There are different versions. It's open source. AWS is involved. How do you guys look at the relationship with Kubernetes here at VMware Explore, and with the customers? Because they have choices. They can go do it on their own. They can add a little bit with Lambda, serverless. They can do more here. It's not easy, it's not as easy as people think it is, and there's a skills gap problem too. We're seeing a lot of these problems out there. What's your take? >> I'll let Matt talk to that. But what I want to say first is, this is also the power of the cloud native ecosystem. The days are gone when companies were selecting one enterprise application and building their stack with that. Today they're building applications using dozens, if not hundreds, of different components from different vendors or open source platforms. And that is really what creates opportunities for those cloud native developers. So maybe you want to... >> Yeah, we're seeing a lot of hybrid solutions out there. So it's not just choosing one vendor, AKS, EKS, or Tanzu; we're seeing all of the above. I had a call this morning with a large healthcare provider and they have a hundred clusters, spread across AKS, EKS and GKE. So it is covering everything, plus the need to have an on-prem solution to manage it all. >> I got a stat I've got to share, and I want to get your reactions; you can laugh or comment, whatever you want to say. I talked to a big CSO, CXO executive at a big company, I won't say the name. We've got a thousand developers; a hundred of them have heard of Kubernetes, okay, 10 have touched it and used it, and one's good at it. And so his point is that there's a lot of Kubernetes need that people are getting aware of. So it shows that there's more and more adoption around. You see a lot of managed services out there.
So it's clear it's happening and I'm over exaggerating the ratio probably. But the point is the numbers kind of make sense as a thousand developers. You start to see people getting adoption to it. They're aware of the value, but being good at it is what we're hearing is one of those things. Can you guys share your reaction to that? Is that, I mean, it's hyperbole at some level, but it does point to the fact of adoption trends. You got to get good at it, you got to know how to use it. >> It's very accurate, actually. It's what we're seeing in the market. We've been doing some research of our own, and we have some interesting numbers that we're going to be sharing soon. Analysts don't have a whole lot of numbers these days. So where we're trying to run our own surveys to get a grasp of the market. One simple survey or research element that I've done myself is I used Google trends. And in Google trends, if you go back to 2004 and you compare VMware against Kubernetes, you get a very interesting graph. What you're going to see is that VMware, the adoption curve is practically complete and Kubernetes is clearly taking off. And the volume of searches for Kubernetes today is almost as big as VMware. So that's a big sign that this is starting to happen. But in this process, we have to get those companies to have all of their engineers to be up to speed on Kubernetes. And that's one of the community efforts that we're helping with. We built a website called learning.kasten.io We're going to rebrand it soon at CubeCon, so stay tuned, but we're offering hands on labs there for people to actually come learn Kubernetes with us. Because for us, the faster the adoption goes, the better for our business. >> I was just going to ask you about the learning. So there's a big focus here on educating customers to help dial down the complexity and really get them, these numbers up as John was mentioning. >> And we're really breaking it down to the very beginning. So at this point we have almost 10 labs as we call them up and they start really from install a Kubernetes Cluster and people really hands on are going to install a Kubernetes Cluster. They learn to build an application. They learn obviously to back up the application in the safest way. And then there is how to tune storage, how to implement security, and we're really building it up so that people can step by step in a hands on way learn Kubernetes. >> It's interesting, this VMware Explore, their first new name change, but VMWorld prior, big community, a lot of customers, loyal customers, but they're classic and they're foundational in enterprises and let's face it. Some of 'em aren't going to rip out VMware anytime soon because the workloads are running on it. So in Broadcom we'll have some good action to maybe increase prices or whatnot. So we'll see how that goes. But the personas here are definitely going cloud native. They did with Tanzu, was a great thing. Some stuff was coming off, the fruit's coming off the tree now, you're starting to see it. CNCF has been on this for a long, long time, CubeCon's coming up in Detroit. And so that's just always been great, 'cause you had the day zero event and you got all kinds of community activity, tons of developer action. So here they're talking, let's connect to the developer. There the developers are at CubeCon. So the personas are kind of connecting or overlapping. I'd love to get your thoughts, Matt on? 
>>So from the personas that we're talking to, there really is a split between the traditional IT ops, a lot of the people that are here today at VMware Explore, and the SREs and the dev ops folks we're also talking with. What really needs to happen is we need to get a little bit more experience, some more training, and we need to get these two groups to really start to coordinate and work together, 'cause you're basically moving from that traditional on-prem environment with a lot of these traditional workloads, and the only way to get that experience is to get your hands dirty. >> Right. >> So how would you describe the persona specifically here versus, say, CubeCon? IT ops? >> Very, very different, well-- >> They still... go ahead, explain. >> Well, I mean, from this perspective, this is all about VMware and everything that they have to offer, so we're dealing with a lot of administrators in that regard. On the Kubernetes side, we have site reliability engineers, and their goal is exactly as their title describes: they want to architect applications that are very resilient and reliable, and it is a different way of working. >> I was on a Twitter Spaces about SREs and dev ops and there were people saying their title's called dev ops. Like, no, no, you do dev ops, you're not the dev ops person-- >> Right, right. >> But they become the dev ops person because you're the developer running operations. So it's been weird how dev ops has been co-opted as a position. >> And that is really interesting. One person told me earlier, when I started at Kasten: we have this new persona, it's the dev ops person, that is the person that we're going after. But then talking to a few other people who were like, "They're not falling from the sky." It's people who used to do other jobs who now have a more dev ops approach to what they're doing. It's not a new-- >> And then the SRE conversation: site reliability engineering comes from Google, from one person managing multiple clusters, to how that's evolved into being the dev ops. So it's been interesting, and this is really the growth of scale, the 10X developer going to more of the cloud native, which is, okay, you got to run ops and make the developer go faster. If you look at the stuff we've been covering on The Cube, the trends have been cloud native developers, which I call dev-ops-like developers. They want to go faster. They want self-service and they don't want to slow down. They don't want to deal with BS, which is go check in security code, wait for the ops team to do something. So data and security seem to be the new ops, not so much IT ops 'cause that's now cloud. So how do you guys see that? Because Kubernetes is rationalizing this, certainly on the compute side, not so much on storage yet, but it seems to be making things better in that grinding area between dev and these complicated ops areas like security and data, where it's constantly changing. What do you think about that? >> Well, there are still a lot of specialty folks in that area in regards to security operations. The whole idea is to be able to script and automate as much as possible and not have to create a ticket to request a VM to be built, or an operating system or an application deployed. They're really empowered to automatically deploy those applications and keep them up. >> And that was the old dev ops role or person. That was what dev ops was called. So again, that is standard. I think at CubeCon, that is something that's expected. >> Yes. >> You would agree with that. >> Yeah.
>> Okay. So now translating VM World, VMware Explore to CubeCon, what do you guys see as happening between now and then? Obviously got re:Invent right at the end in that first week of December coming. So that's going to be two major shows coming in now back to back that're going to be super interesting for this ecosystem. >> Quite frankly, if you compare the persona, maybe you have to step away from comparing the personas, but really compare the conversations that we're having. The conversations that you're having at a CubeCon are really deep dives. We will have people coming into our booth and taking 45 minutes, one hour of the time of the people who are supposed to do 10 minute demos because they're asking more and more questions 'cause they want to know every little detail, how things work. The conversations here are more like, why should I learn Kubernetes? Why should I start using Kubernetes? So it's really early day. Now, I'm not saying that in a bad way. This is really exciting 'cause when you hear CNCF say that 97% of enterprises are using Kubernetes, that's obviously that small part of their world. Those are their members. We now want to see that grow to the entire ecosystem, the larger ecosystem. >> Well, it's actually a great thing, actually. It's not a bad thing, but I will counter that by saying I am hearing the conversation here, you guys'll like this on the Veeam side, the other side of the Veeam, there's deep dives on ransomware and air gap and configuration errors on backup and recovery and it's all about Veeam on the other side. Those are the guys here talking deep dive on, making sure that they don't get screwed up on ransomware, not Kubernete, but they're going to Kub, but they're now leaning into Kubernetes. They're crossing into the new era because that's the apps'll end up writing the code for that. >> So the funny part is all of those concepts, ransomware and recovery, they're all, there are similar concepts in the world of Kubernetes and both on the Veeam side as well as the Kasten side, we are supporting a lot of those air gap solutions and providing a ransomware recovery solution and from a air gap perspective, there are a many use cases where you do need to live. It's not just the government entity, but we have customers that are cruise lines in Europe, for example, and they're disconnected. So they need to live in that disconnected world or military as well. >> Well, let's talk about the adoption of customers. I mean this is the customer side. What's accelerating their, what's the conversation with the customer at base, not just here but in the industry with Kubernetes, how would you guys categorize that? And how does that get accelerated? What's the customer situation? >> A big drive to Kubernetes is really about the automation, self-service and reliability. We're seeing the drive to and reduction of resources, being able to do more with less, right? This is ongoing the way it's always been. But I was talking to a large university in Western Canada and they're a huge Veeam customer worth 7000 VMs and three months ago, they said, "Over the next few years, we plan on moving all those workloads to Kubernetes." And the reason for it is really to reduce their workload, both from administration side, cost perspective as well as on-prem resources as well. So there's a lot of good business reasons to do that in addition to the technical reliability concerns. >> So what is those specific reasons? 
This is where now you start to see the rubber hit the road on acceleration. >> So I would say scale and flexibility: that ecosystem, that opportunity to choose any application or any tool from the cloud native ecosystem, is a big driver. And I wanted to add to the adoption point: another area where I see a lot of interest is everything AI and machine learning. One example is also a customer coming from Veeam; we're seeing a lot of that, and that's a great thing. It's an AI company that is doing software for automated driving. They decided that VMs alone were not going to be good enough for all of their workloads, and that select workloads, the more scalable ones where scalability was more of a topic, would move to Kubernetes. I think at this point they have like 20% of their workloads on Kubernetes, and they're not planning to do away with VMs. VMs are always going to be there, just like mainframes still exist. >> Yeah, oh yeah. They're accelerating actually. >> We're projecting over the next few years that we're going to go to 50/50 and eventually lean towards more Kubernetes than VMs, but it is going to be a mix. >> Do you have a favorite customer example, Tom, that you think really articulates the value of what Kubernetes can deliver to customers, where you guys are really coming in and helping to demystify it? >> I would think Sopra Steria is a really great example, and you know the details about it. >> I love the Sopra Steria story. They were an AWS customer running OpenShift version three and they needed to move to OpenShift version four. There is no in-place upgrade; you have to migrate all your apps. Now, Sopra Steria is a large French IT firm. They have over 700 developers in their environment, and by their estimation this migration was going to take a few months to get done. We were able to go in there and help them with the automation of that migration; Kasten was able to help them architect it, and we did it in the course of a weekend with two people. >> A weekend? >> A weekend. >> That's a hackathon. I mean, that's not real, come on. >> Compared to thousands of man hours and a few months. Not to mention, since they were able to retire that old OpenShift cluster, the OpenShift three, they were able to stop paying Jeff Bezos for a couple of those months, which is tens of thousands of dollars per month.
We had a couple chats earlier today. We had the influencers on, and all the super cloud conversations, trying to get more data to share with the audience across multiple areas. One of them was Amazon, and the hyperscalers like Amazon, Azure, Google and the rest out there, Oracle, IBM and everyone else. There's almost a consensus that maybe it's time for some peace amongst the cloud vendors. Like, "Hey, you've already won." (Tom laughs) Everyone's won; we know where everyone is. Let's go to peace time, 'cause the relationship's not going to change between public cloud and the new world. So there's a consensus: what does peace look like? I mean, first of all, the pie's getting bigger. You're seeing ecosystems forming around all the big new areas, and that's a good thing. The tides rise and the pie's getting bigger; there's a bigger market out there now, so people can share. >> I've never worked for any of these big players, so I would have to agree with you, but peace would not drive innovation, and my heart is with tech innovation. I love it when vendors come up with new solutions that will make things better for customers, and if that means that we're moving from on-prem to cloud and back to on-prem, I'm fine with that. >> What excites me is really having the flexibility of being able to choose any provider you want, because you do have open standards, being cloud native in the world of Kubernetes. I recently discovered that the Canadian federal government had mandated to their financial institutions that, "Yes, you may have started all of your cloud presence in Azure; you need to have an option to be elsewhere." So it's not like-- >> Well, the sovereign cloud is one of those big initiatives. But also, going back to Java, we heard another guest earlier, we were talking about Java: write once, run anywhere, right? You can't do that today across clouds, but now with containers-- >> You can. >> Again, this is the point that's happening. Explain. >> So Kubernetes is a strict standard, and all of the applications are written to that. So whether you are deploying MongoDB or Postgres or Cassandra or any of the other cloud native apps, you can deploy them pretty much the same whether they're in AKS, EKS or on Tanzu, and it makes it much easier. The world became just a lot less proprietary. >> So that's the story that everybody wants to hear. How does that happen in a way that doesn't stall the innovation and the developer growth? 'Cause the developers are driving a lot of change. I mean, for all the talk in the industry, the developers are doing pretty good right now. They've got a lot of open source, plentiful and growing like crazy. You've got shifting left in the CI/CD pipeline. You've got tools coming out with Kubernetes. Infrastructure as code is almost a 100% reality right now. So there are a lot of good things going on for developers. That's not the issue. The issue is just underneath. >> It's a skillset, and that is really one of the biggest challenges I see in our deployments: a lack of experience. And it's not everyone. There are some folks that have been playing around for the last couple of years with it and they do have that experience, but there are many people that are still young at this.
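Picking up Matt's earlier point about Kubernetes being a strict standard across AKS, EKS and Tanzu, the sketch below shows one way the same Deployment object can be applied to different clusters purely by switching kubeconfig contexts, using the standard Go client. This is a hedged illustration rather than anything from Kasten; the context names and kubeconfig path are hypothetical placeholders.

```go
// Sketch: one Deployment spec, applied unchanged to any conforming cluster,
// selected only by the kubeconfig context.
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func deploy(kubeconfig, contextName string) error {
	// Build a client for whichever cluster the context points at.
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		&clientcmd.ClientConfigLoadingRules{ExplicitPath: kubeconfig},
		&clientcmd.ConfigOverrides{CurrentContext: contextName},
	).ClientConfig()
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}

	replicas := int32(2)
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "postgres", Namespace: "default"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "postgres"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "postgres"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "postgres", Image: "postgres:15"}},
				},
			},
		},
	}
	_, err = client.AppsV1().Deployments("default").Create(context.TODO(), dep, metav1.CreateOptions{})
	return err
}

func main() {
	// Hypothetical contexts pointing at different managed clusters.
	for _, ctx := range []string{"aks-prod", "eks-prod", "gke-prod"} {
		_ = deploy("/home/user/.kube/config", ctx)
	}
}
```

The portability claim in the conversation maps onto the fact that the Deployment spec itself never changes; only the connection target does.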
We'll interview all the committee chairs, program chairs. We'll get the scoop on that, we do that every year. But while we got you guys here, let's do a little pre-pre-preview of CubeCon. What can we expect? What do you guys think is going to happen this year? What does CubeCon look? You guys our big sponsor of CubeCon. You guys do a great job there. Thanks for doing that. The community really recognizes that. But as Kubernetes comes in now for this year, you're looking at probably the what third year now that I would say Kubernetes has been on the front burner, where do you see it on the hockey stick growth? Have we kicked the curve yet? What's going to be the level of intensity for Kubernetes this year? How's that going to impact CubeCon in a way that people may or may not think it will? >> So I think first of all, CubeCon is going to be back at the level where it was before the pandemic, because the show, as many other shows, has been suffering from, I mean, virtual events are not like the in-person events. CubeCon LA was super exciting for all the vendors last year, but the attendees were not really there yet. Valencia was a huge bump already and I think Detroit, it's a very exciting city I heard. So it's going to be a blast and it's going to be a huge attendance, that's what I'm expecting. Second I can, so this is going to be my third personally, in-person CubeCon, comparing how vendors evolved between the previous two. There's going to be a lot of interesting stories from vendors, a lot of new innovation coming onto the market. And I think the conversations that we're going to be having will yet, again, be much more about live applications and people using Kubernetes in production rather than those at the first in-person CubeCon for me in LA where it was a lot about learning still, we're going to continue to help people learn 'cause it's really important for us but the exciting part about CubeCon is you're talking to people who are using Kubernetes in production and that's really cool. >> And users contributing projects too. >> Also. >> I mean Lyft is a poster child there and you've got a lot more. Of course you got the stealth recruiting going on there, Apple, all the big guys are there. They have a booth and no one's attending you like, "Oh come on." Matt, what's your take on CubeCon? Going in, what do you see? And obviously a lot of dynamic new projects. >> I'm going to see much, much deeper tech conversations. As experience increases, the more you learn, the more you realize you have to learn more. >> And the sharing's going to increase too. >> And the sharing, yeah. So I see a lot of deep conversations. It's no longer the, "Why do I need Kubernetes?" It's more, "How do I architect this for my solution or for my environment?" And yeah, I think there's a lot more depth involved and the size of CubeCon is going to be much larger than we've seen in the past. >> And to finish off what I think from the vendor's point of view, what we're going to see is a lot of applications that will be a lot more enterprise-ready because that is the part that was missing so far. It was a lot about the what's new and enabling Kubernetes. But now that adoption is going up, a lot of features for different components still need to be added to have them enterprise-ready. >> And what can the audience expect from you guys at CubeCon? Any teasers you can give us from a marketing perspective? >> Yes. We have a rebranding sitting ready for learning website. It's going to be bigger and better. 
So we're no longer going to call it learning.kasten.io, but I'll be happy to come back with you guys and present the new name at CubeCon. >> All right. >> All right, that sounds like a deal. Guys, thank you so much for joining John and me, breaking down all things Kubernetes, talking about customer adoption, the challenges, but also what you're doing to demystify it. We appreciate your insights and your time. >> Thank you so much. >> Thank you very much. >> Our pleasure. >> Thanks, Matt. >> For our guests and John Furrier, I'm Lisa Martin. You've been watching The Cube's live coverage of VMware Explore 2022. Thanks for joining us. Stay safe. (gentle music)
AWS Heroes Panel | Open Cloud Innovations
(upbeat music) >> Hello, and welcome back to AWS Startup Showcase, I'm John Furrier, your host. This is the Hero panel, the AWS Heroes. These are folks that have a lot of experience in Open Source, having fun building great projects and commercializing the value and best practices of Open Source innovation. We've got some great guests here. Liz Rice, Chief Open Source Officer, Isovalent. CUBE alumni, great to see you. Brian LeRoux, who is the Co-founder and CTO of begin.com. Erica Windisch, who's an Architect for Developer Experience. AWS Hero, also CUBE alumni. Casey Lee, CTO of Gaggle. Doing some great stuff in ed tech. Great collection of experts and experienced folks doing some fun stuff, welcome to this conversation, this CUBE panel. >> Hi. >> Thanks for having us. >> Hello. >> Let's go down the line. >> I don't normally do this, but since we're remote and we have such great guests, go down the line and talk about why Open Source is important to you guys. What projects are you currently working on? And what's the coolest thing going on there? Liz, we'll start with you. >> Okay, so I am very involved in the world of Cloud Native. I'm the chair of the technical oversight committee for the Cloud Native Computing Foundation. So that means I get to see a lot of what's going on across a very broad range of Cloud Native projects. More specifically, at Isovalent I focus on Cilium, which is based on a technology called eBPF. That is, to me, probably the most exciting technology right now. And then finally, I'm also involved in an organization called OpenUK, which is really pushing for more use of open technologies here in the United Kingdom. So spread around lots of different projects. And I'm in a really fortunate position, I think, to see what's happening with lots of projects and also the commercialization of lots of projects. >> Awesome, Brian, what project are you working on? >> My working project these days is called Architect. It's an Open Source project built on top of AWS SAM. It adds a lot of sugar and terseness to the SAM experience and just makes it a lot easier to work with and get started. AWS can be a little bit intimidating to people at times. And the Open Source community is stepping up to make some of that on-ramp a little bit easier. And I'm also an Apache member. And so I keep a hairy eyeball on what's going on in that reality all the time. And I've been doing this open-source thing for quite a while, and yeah, I love it. It's a great thing. It's real science. We get to verify each other's work and we get to expand and build on human knowledge. So that's a huge honor to just even be able to do that and I feel stoked to be here, so thanks for having me. >> Awesome, yeah, and totally great. Erica, what's your current situation going on here? What's happening? >> Sure, so I am currently working on developer experience for a number of Open Source SDKs and CLI components from my current employer. And previously, recently I left New Relic where I was working on integrating with OpenTelemetry, as well as a number of other things. Before that I was a maintainer of Docker and of OpenStack. So I've been in this game for a while as well. And I tend to just put my fingers in a lot of little pies, anywhere from DVD players 20 years ago to a lot of this open telemetry and monitoring, and various SDKs and developer tools, like Docker and OpenStack and the SDKs that I work on now, all very much focusing on the developer as the user. >> Yeah, you're always on the wave, Erica, great stuff.
Casey, what's going on? Do you got some great ed tech happening? What's happening with you? >> Yeah, sure. The primary Open Source project that I'm contributing to right now is act. This is a tool I created a couple of years back when GitHub Actions first came out, and my motivation there was I'm just impatient. And that whole commit, push, wait time where you're testing out your pipelines is painful. And so I wanted to build a tool that allowed developers to test out their GitHub Actions workflows locally. And so this tool uses Docker containers to emulate the GitHub Actions environment and gives you fast feedback on those workflows that you're building. Lot of innovation happening at GitHub. And so we're just trying to keep up and continue to replicate those new features and functionalities in the local runner. And the biggest challenge I've had with this project is just keeping up with the community. We just passed 20,000 stars, and it's a normal week to get like 10 PRs. So super excited to announce, just yesterday actually, I invited four of the most active contributors to help me with maintaining the project. And so this is like a big deal for me, letting the project go and bringing other people in to help lead it. So, yeah, huge shout out to those folks that have been helping with driving that project. So looking forward to what's next for it. >> Great, we'll make sure the SiliconANGLE writers catch that quote there. Great call out. Let's start, Brian, you made me realize when you mentioned Apache and then you've been watching all the stuff going on, it brings up the question of the evolution of Open Source, and the commercialization trends have been very interesting these days. You're seeing CloudScale really impact also with the growth of code. And Liz, if you remember, the Linux Foundation keeps making projections and they keep blowing past them every year on more and more code and more and more entrants coming in, not just individuals, corporations. So you're starting to see Netflix donate something, you've got Lyft donate some stuff, it becomes a project, a company forms around it. There's a lot of entrepreneurial activity that's creating these new abstraction layers, new platforms, not just tools. So you start to see a new kick-up trajectory with Open Source. You guys want to comment on this, because this is going to impact how fast the enterprise will see value here. >> I think a really great example of that is a project called Backstage that's just come out of Spotify. And it's going through the incubation process at the CNCF. And that's why it's front of mind for me right now, 'cause I've been working on the due diligence for that. And the reason why I thought it was interesting in relation to your question is it's spun out of Spotify. It's fully Open Source. They have a ton of different enterprises using it as this developer portal, but they're starting to see some startups emerging offering like a hosted managed version of Backstage or offering services around Backstage or offering commercial plugins into Backstage. And I think it's really fascinating to see those ecosystems building up around a project and different ways that people can. I'm a big believer. You cannot sell the Open Source code, but you can sell other things that create value around Open Source projects. So that's really exciting to see. >> Great point. Anyone else want to weigh in and react to that? Because it's the new model. It's not the old way. I mean, I remember when I was in college, we had the pirated software.
Open Source wasn't around. So you had to deal under the table. Now it's free. But I mean, the old way was you had to convince the enterprise, like, you've got to harden it, build the community, and the community managed the quality of the code. And then you had to build the company to make sure they could support it. Now the companies are actually involved in it, right? And then new startups are forming faster. And the proof points are shorter and highly accelerated for that. I mean, it's a whole new- >> It's a Cambrian explosion, and it's great. It's one of those things that it's challenging for the new developers because they come in and they're like, "Whoa, what is all this stuff that I'm supposed to figure out?" And there's no right answer and there's no wrong answer. There's just tons of it. And I think that there's a desire for us to have one sort of well-known, well-trodden happy path, but as an audience we're a lot better with a more diverse community, with lots of options, with lots of ways to approach these problems. And I think it's just great. A challenge that we have with all these options, and all this Cambrian explosion of projects and all these competing ideas, right now, is the sustainability; it's a bit of a tricky question to answer. We know that there's a commercialization aspect that helps us fund these projects, but how we compose the open versus the commercial source is still a bit of a tricky question and a tough one for a lot of folks. >> Erica, would you chime in on that for a second. I want to get your angle on that, this experience and all this code, and I'm a new person, I'm an existing person. Do I get like a blue check mark and verify? I mean, these are questions like, well, how do you navigate? >> Yeah, I think this has been something happening for a while. I mean, back in the early OpenStack days, 2010, for instance, Rackspace open sourcing OpenStack, and Anso Labs and so forth, and then having all these companies forming and creating startups around this. I started at a company called Cloudscaling back in late 2010, and we had some competitors such as Piston and so forth, where a lot of the Anso Labs people went. But then, the real winners, I think, from OpenStack ended up being the enterprises that jumped in. We had Red Hat in particular, as well as HP and IBM, jumping in and investing in OpenStack, and really proving out a lot of... not that it was the first time, but this is when we started seeing billions of dollars pouring into Open Source projects and Open Source Foundations, such as the OpenStack Foundation, which preceded a lot of the things that we now see with the Linux Foundation, which was then created a little bit later. And at the same time, I'm also reflecting a little bit what Brian said, because there are projects that don't get funded, that don't get the same attention, but they're also getting used quite significantly. Things like Log4j really bringing this to the spotlight in terms of projects that are used everywhere by everything, with significant outsized impacts on the industry, that are not getting funded, that aren't flashy enough, that aren't exciting enough because it's just logging, but a vulnerability in it brings everything and everybody down and has possibly billions of dollars of impact to our industry because nobody wanted to fund this project.
>> I think that brings up the commercialization point about maybe bringing a venture capital model in saying, "Hey, that boring little logging thing could be a key ingredient for, say, solving some observability problems, so let's put some cash in." Again, we'd never seen that before. Now you're starting to see that kind of a real smart investment thesis going into Open Source projects. I mean, Prometheus, Crafter, these are projects that turned into companies. This is turning up companies. >> A decade ago, there was no money in Dev tools; I think that's been fully debunked now. That used to be a concept that the venture community believed, but there's just too much evidence to the contrary, the companies like HashiCorp, Datadog, the list goes on and on. I think the challenge for the Open Source (indistinct) comes back to foundations and working (indistinct) these developers make this code safe and secure. >> Casey, what's your reaction to all of this? You've got, so a project has gained some traction, got some momentum. There's a lot of mission critical. I won't say white spaces, but the opportunities in the big cloud game happening. And there's a lot of, I won't say too many entrepreneurial, but there's a lot of community action happening that's precommercialization that's getting traction. How does this all develop naturally and then vector in quickly when it hits? >> Yeah, I want to go back to the Log4j topic real quick. I think that it's a great example of an area that we need to do better at. And there was a cool article that Rob Pike wrote describing how to quantify the criticality. I think "Quantifying Criticality" was the article he wrote on how to use metrics to determine how valuable, how important a piece of Open Source is to the community. And we really need to highlight that more. We need a way to make it more clear how important this software is, how many people depend on it and how many people are contributing to it. And because right now we all do that. Like if I'm going to evaluate an Open Source software, sure, I'll look at how many stars it has and how many contributors it has. But I got to go through and do all that work myself and come up with. It would be really great if we had an agreed upon method for ranking the criticality of software, but then also the risk, hey, that this is used by a ton of people, but nobody's contributing to it anymore. That's a concern. And that would be great for potential users of that to signal whether or not it makes sense. The Open Source Security Foundation, just getting off the ground, they're doing some work in this space, and I'm really excited to see where they go with that, looking at ways to score criticality. >> Well, this brings up a good point while we've got everyone here, let's take a plug and plug a project you think that's not getting the visibility it needs. Let's go through each of you, point out a project that you think people should be looking at and talking about that might get some free visibility here. Anyone want to highlight projects they think should be focused more on, or that needs a little bit of love?
>> I think, I mean, particularly if we're talking about these sort of vulnerability issues, there's a ton of work going on, like in the Secure Software Foundation, other foundations, I think there's work going on in Apache somewhere as well, around the bill of materials, the software bill of materials, secure software supply chain security; even enumerating your dependencies is not trivial today. So I think there's going to be a ton of people doing really good work on that, as well as the criticality aspect. It's all like that. There's a really great xkcd cartoon with your software project and some really big monolithic lumps, and then this tiny little piece at a very important point that's maintained by somebody in his bedroom in Montana or something, and it calls that out. >> Yeah, you just opened up where the next lightning in a bottle comes from. And this is, I think, the beauty of Open Source, is that you get a little collaboration, you get three feet in a cloud of dust going and you get some momentum, and if it's relevant, it rises to the top. I think that's the collective intelligence of Open Source. The question I want to ask the panel here is when you go into an enterprise, and now that the game is changing with a much more collaborative and involved model, what's the story if they say, hey, what's in it for me, how do I manage the Open Source? What's the current best practice? Because there's no doubt I can't ignore it. It's in everything we do. How do I organize around it? How do I build around it to be more efficient and more productive and reduce the risk on vulnerabilities, to managing staff, making sure the right teams are in place, the right agility and all those things? >> You called it, they got to get skin in the game. They need to be active and involved, and donating to a sustainable Open Source project is a great way to start. But if you really want to be active, then you should be committing. You should have a goal for your organization to be contributing back to that project. Maybe not committing code, it could be committing resources into the docs or in the tests, or even tweeting about an Open Source project is contributing to it. And I think a lot of these enterprises could benefit a lot from getting more active with the Open Source Foundations that are out there. >> Liz, you've been actively involved. I know we've talked personally when the CNCF started, which had a great commercial uptake from companies. What do you think the current state-of-the-art kind of equation is, has it changed a little bit? Or is it the game still the same? >> Yeah, and in the early days of the CNCF, it was very much dominated by vendors behind the project. And now we're seeing more and more membership from end-user companies, the kind of enterprises that are building their businesses on Cloud Native, but that's not their business in itself. The infrastructure is not their business. And I think seeing those companies putting money in, putting time in, as Brian says, contributing resources. Quite often there's enough money, but finding the talent to do the work and finding people who are prepared to actually chop the wood and carry the water, >> Exactly. >> that it's hard. >> And if enterprises can find people to spend time on Open Source projects, help with those chores, it's hugely valuable. And it's one of those rising-tide-floats-all-the-boats things. We can raise security, we can reduce the amount of dependency on unmaintained projects, collectively.
>> I think the business models there, I think one of the things I'll react to and then get your guys' comments is remember which CubeCon it was, it was one of the early ones. And I remember seeing Apple having a booth, but nobody was manning. It was just an Apple booth. They weren't doing anything, but they were recruiting. And I think you saw the transition of a business model where the worry about a big vendor taking over a project and having undue influence over it goes away because I think this idea of participation is also talent, but also committing that talent back into the communities as a model, as a business model, like, okay, hire some great people, but listen, don't screw up the Open Source piece of it 'cause that's a critical. >> Also hire a channel, right? They can use those contributions to source that talent and build the reputation in the communities that they depend on. And so there's really a lot of benefit to the larger organizations that can do this. They'll have a huge pipeline of really qualified engineers right out the gate without having to resort to cheesy whiteboard interviews, which is pretty great. >> Yeah, I agree with a lot of this. One of my concerns is that a lot of these corporations tend to focus very narrowly on certain projects, which they feel that they depend greatly, they'll invest in OpenStack, they'll invest in Docker, they'll invest in some of the CNCF projects. And then these other projects get ignored. Something that I've been a proponent of for a little bit for a while is observability of your dependencies. And I don't think there's quite enough projects and solutions to this. And it sounds maybe from lists, there are some projects that I don't know about, but I also know that there's some startups like Snyk and so forth that help with a little bit of this problem, but I think we need more focus on some of these edges. And I think companies need to do better, both in providing, having some sort of solution for observability of the dependencies, as well as understanding those dependencies and managing them. I've seen companies for instance, depending on software that they actively don't want to use based on a certain criteria that they already set projects, like they'll set a requirement that any project that they use has a code of conduct, but they'll then use projects that don't have codes of conduct. And if they don't have a code of conduct, then employees are prohibited from working on those projects. So you've locked yourself into a place where you're depending on software that you have instructed, your employees are not allowed to contribute to, for certain legal and other reasons. So you need to draw a line in the sand and then recognize that those projects are ones that you don't want to consume, and then not use them, and have observability around these things. >> That's a great point. I think we have 10 minutes left. I want to just shift to a topic that I think is relevant. And that is as Open Source software, software, people develop software, you see under the hood kind of software, SREs developing very quickly in the CloudScale, but also you've got your classic software developers who were writing code. So you have supply chain, software supply chain challenges. You mentioned developer experience around how to code. You have now automation in place. So you've got the development of all these things that are happening. Like I just want to write software. Some people want to get and do infrastructure as code so DevSecOps is here. 
So what does that look like going forward? How is the future of Open Source going to make the developers just want to code quickly? And the folks who want to tweak the infrastructure a bit more efficient, any views on that? >> At Gaggle, we're using AWS' CDK exclusively for our infrastructure as code. And it's a great transition for developers, instead of writing YAML or JSON, or even HCL, for their infrastructure code, now they're writing code in the language that they're used to, Python or JavaScript, and what that's providing is an easier transition for developers into that infrastructure as code at Gaggle here, but it's also providing an opportunity to provide reusable constructs that some Devs can build on. So if we've got a very opinionated way to deploy a serverless app and a database and do auto-scaling behind it and all that stuff, we can present that to a developer as a library, and they can just consume it as it is. Maybe that's as deep as they want to go and they're happy with that. But then if they want to go deeper into it, they can either use some of the lower level constructs or create PRs to the platform team to have those constructs changed to fit their needs. So it provides a nice on-ramp for developers to use the tools and languages they're used to, and then also go deeper as they need. >> That's awesome. Does that mean they're not full stack developers anymore, that they're half stack developers and it's taken care of for them? >> I don't know either. >> Weigh in. >> No, only kidding. Anyway, any other reactions to this whole? I just want to code, make it easy for me, and some people want to get down and dirty under the hood. >> So I think that for me, Docker was always a key part of this. I don't know when DevSecOps was coined exactly, but I was talking with people about it back in 2012. And when I joined Docker, it was a part of that vision for me, was that Docker was applying these security principles by default for your application. It wasn't, I mean, yes, everybody adopted because of the portability and the acceleration of development, but it was, for me, the fact that it was limiting what you could do from a security angle by default, and then giving you these tunables so that you can control it further. You asked about a project that may not get enough recognition; it's something called DockerSlim, which is designed to optimize your containers and will make them smaller, but it also constrains the security footprint and will remove capabilities from the container. It will help you build security profiles for AppArmor and the Red Hat one, SELinux. >> SELinux. >> Yeah, and this is something that I think a lot of developers, it's kind of outside of the realm of things that they're really thinking about. So the more that we can automate those processes and make it easier out of the box for users or for... when I say users, I mean developers, so that it's straightforward and automatic, and also giving them the capability of refining it and tuning it as needed, or simply choosing platforms like serverless offerings, which have these security constraints built in out of the box and sometimes maybe less tunable, but very strong by default. And I think that's a good place for us to be, where we just enforce these things and make you do things in a secure way. >> Yeah, I'm a huge fan of Kubernetes, but it's not the right hammer for every nail.
And there are absolutely tons of applications that are better served by something like Lambda, where a lot more of that security surface is taken care of for the developer. And I think we will see better tooling around security profiling and making it easier to shrink wrap your applications. There are plenty of products out there that can help you with this in a cloud native environment. But I think for the smaller developer, let's say, or an earlier stage company, yeah, it needs to be so much more straightforward. Really does. >> Really an interesting time. 10 years ago, when I was working at Adobe, we used to requisition all these analysts to tell us how many developers there were for the market. And we thought there were about 20 million developers. If GitHub's to be believed, we think there are now around 80 million developers. So both these groups are probably wrong in their numbers, but the takeaway here for me is that we've got a lot of new developers, and a lot of these new developers are really struck by a paradox of choice. And they're typically starting on the front end. And so there's a lot of movement in the stack towards the front end. We saw that at re:Invent when Amazon was really pushing Amplify, 'cause they're seeing this too. It's interesting because this is where folks start. And so a lot of the abstractions are moving in that direction, but maybe not always necessarily totally appropriate. And so finding the right balance for folks is still a work in progress. Like Lambda is a great example. It lets me focus totally on just business logic. I don't have to think about infrastructure pretty much at all. And if I'm newer to the industry, that makes a lot of sense to me. As use cases expand, all of a sudden, reality intervenes, and it might not be appropriate for everything. And so figuring out what those edges are is still the challenge, I think. >> All right, thank you very much for coming on this CUBE panel. AWS Heroes, thanks everyone for coming. I really appreciate it, thank you. >> Thank you. >> Thank you. >> Okay. >> Thanks for having me. >> Okay, that's a wrap here, back to the program and the awesome startups. Thanks for watching. (upbeat music)
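Casey's point above about quantifying criticality has since taken concrete shape in the OpenSSF's criticality_score work. The sketch below is a hedged reconstruction in Python of the aggregation described in that write-up, not the project's exact code; the weights, signal values, and thresholds are placeholder numbers chosen only to make the example run.

```python
from math import log

def criticality_score(signals):
    """Aggregate (weight, value, threshold) signals into a 0..1 criticality score.

    Rough shape of the 'Quantifying Criticality' aggregation: each raw signal S_i
    is log-scaled, capped by its threshold T_i, then combined as a weighted mean.
    """
    total_weight = sum(alpha for alpha, _, _ in signals)
    return sum(
        alpha * log(1 + s) / log(1 + max(s, t))
        for alpha, s, t in signals
    ) / total_weight

# Placeholder signals for a hypothetical project: (weight, observed value, threshold).
example = [
    (1.0, 4000, 10000),  # e.g. downstream dependents
    (2.0, 120, 5000),    # e.g. contributor count
    (0.5, 30, 26),       # e.g. recent commit frequency
]
print(round(criticality_score(example), 3))
```

The value of a shared formula like this is exactly what Casey describes: an agreed-upon way to rank how heavily the ecosystem leans on a project, instead of everyone eyeballing stars and contributor counts on their own.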
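The reusable, opinionated constructs Casey describes at Gaggle can be pictured with a small CDK sketch. This is a hypothetical example, assuming CDK v2 (aws-cdk-lib) in Python; the ServerlessApp name, its single handler_asset parameter, and its defaults are illustrative assumptions, not Gaggle's actual platform library.

```python
from aws_cdk import Duration, aws_dynamodb as dynamodb, aws_lambda as lambda_
from constructs import Construct

class ServerlessApp(Construct):
    """Opinionated bundle: a Lambda handler wired to a pay-per-request DynamoDB table."""

    def __init__(self, scope: Construct, construct_id: str, *, handler_asset: str) -> None:
        super().__init__(scope, construct_id)

        # Platform-team defaults: on-demand billing, no capacity knobs exposed.
        self.table = dynamodb.Table(
            self, "Table",
            partition_key=dynamodb.Attribute(name="pk", type=dynamodb.AttributeType.STRING),
            billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,
        )

        # The only thing a consuming developer supplies is the path to their handler code.
        self.handler = lambda_.Function(
            self, "Handler",
            runtime=lambda_.Runtime.PYTHON_3_9,
            handler="index.handler",
            code=lambda_.Code.from_asset(handler_asset),
            timeout=Duration.seconds(30),
            environment={"TABLE_NAME": self.table.table_name},
        )
        self.table.grant_read_write_data(self.handler)
```

A product team would instantiate ServerlessApp(self, "Api", handler_asset="lambda/") inside its stack and only drop down to lower-level constructs, or send the platform team a PR against this construct, when the defaults stop fitting, which is the on-ramp-then-go-deeper pattern described above.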
Kim Lewandowski and Dan Lorenc, Chainguard, Inc. | KubeCon + CloudNativeCon NA 2021
>>Hello, and welcome back to theCUBE's coverage of KubeCon + CloudNativeCon 2021. We're here in person at a real event. I'm John Furrier, host of theCUBE, with Dave Nicholson. We've got great guests here. Two founders of a brand new startup, one week old: Kim Lewandowski and Dan Lorenc, uh, with Chainguard, former Google employees, open source community members, decided to start a company, uh, five people total. Congratulations. Welcome to theCUBE. >>Thank you. Thank you for >>Having us. So tell us, like, the product, you know, we know you don't have a price. So take us through the story, because this is one of those rare moments. We got a great chance to chat with you guys just a week into the newly formed company and the team. What's the focus, what's the vision? >>How far back do you want to go with this story >>And why you left Google? So, you know, over a gin and tonic, or if we get a couple of beers, I can do that. We can do that. Let's just take over the world. >>Yeah. So we've both been at Google, uh, for awhile. Um, the last couple of years we've been really worried about and focused on open-source security risk and supply chain security in general and software. Um, it's been a really interesting time, as you probably noticed, uh, to be in that space, but it wasn't that interesting two years ago or even a year and a half ago. Um, so we were doing a bunch of this work at Google and in open source. Nobody really understood it. People kind of looked at us funny at talks and conferences. Um, and then beginning of this year, a bunch of attacks started happening, uh, things in the headlines like SolarWinds, the SolarWinds attack, like you say, and all these different ransomware things happening. Uh, companies and governments are getting hit with supply chain attacks. So overnight people kind of started caring and being really worried about the stuff that we've been doing for a while. So it was a pretty cool thing to be a part of. And it seemed like a good time to start a company and keep your
We're here at cloud native con it's hundreds of open source vendors, hundreds of open libraries that people are reusing. So your, your trust, uh, radius and your attack radius extends to not just your own companies, your own developers, but to everyone at this conference. And then everyone that they rely on all the way out. Uh, it's quite terrifying. It's a surface, the surface area explode pretty quickly >>And people are going and the, and the targeting to, because everyone's touching the code, it's open. It's a lot of action going on. How do you solve the problem? What is the approach? What's the mindset? What's the vision on the problems solving solutions? >>Yeah, that's a great question. I mean, I think like you said, the first step is awareness. Like Dan's been laughing, he's been, he felt like a crazy guy in the corner saying, you know, stop building software underneath your desk and you know, getting companies, >>Hey, we didn't do, why don't you tell them? I was telling him for five years. >>Yeah. But, but I think one of his go-to lines was like, would you pick up a thumb drive off the side of the street and plug it into your computer? Probably not. But when you download, you know, an open source package or something, that's actually can give you more privileges and production environments and it's so it's pretty scary. Um, so I think, you know, for the last few years we've been working on a number of open source projects in this space. And so I think that's where we're going to start is we're going to look at those and then try to grow out the community. And we're, we're watching companies, even like solar winds, trying to piece these parts together, um, and really come up with a better solution for themselves. >>Are there existing community initiatives or open source efforts that are underway that you plan to participate in or you chart? Are you thinking of charting a new >>Path? >>Oh, it's that looks like, uh, Thomas. Yeah, the, the SIG store project we kicked off back in March, if you've covered that or familiar with that at all. But we kicked that off back in March of 2021 kind of officially we'd look at code for awhile before then the idea there was to kind of do what let's encrypted, uh, for browsers and Webster, um, security, but for code signing and open source security. So we've always been able to get code signing certificates, but nobody's really using them because they're expensive. They're complicated, just like less encrypted for CAS. They made a free one that was automated and easy to use for developers. And now people do without thinking about it in six stores, we tried to do the same thing for open source and just because of the headlines that were happening and all of the attacks, the momentum has just been incredible. >>Is it a problem that people just have to just get on board with a certain platform or tool or people have too many tools, they abandoned them there, their focus shifts is there. Why what's the, what's the main problem right now? >>Well, I think, you know, part of the problem is just having the tools easy enough for developers are going to want to use them and it's not going to get in our way. I think that's going to be a core piece of our company is really nailing down the developer experience and these toolings and like the co-sign part of SIG store that he was explaining, like it's literally one command line to sign, um, a package, assign a container and then one line to verify on the other side. 
And then these organizations can put together sort of policies around who they trust and their system like today it's completely black box. They have no idea what they're running and takes a re >>You have to vape to rethink and redo everything pretty much if they want to do it right. If they just kind of fixing the old Europe's sold next solar with basically. >>Yeah. And that's why we're here at cloud native con when people are, you know, the timing is perfect because people are already rethinking how their software gets built as they move it into containers and as they move it into Kubernetes. So it's a perfect opportunity to not just shift to Kubernetes, but to fix the way you build software from this, >>What'd you say is the most prevalent change mindset change of developers. Now, if you had to kind of, kind of look at it and say, okay, current state-of-the-art mindset of a developer versus say a few years ago, is it just that they're doing things modularly with more people? Or is it more new approaches? Is there a, is there a, >>I think it's just paying attention to your building release process and taking it seriously. This has been a theme for, since I've been in software, but you have these very fancy production data centers with physical security and all these levels of, uh, Preston prevention and making sure you can't get in there, but then you've got a Jenkins machine that's three years old under somebody's desk building the code that goes into there. >>It gets socially engineered. It gets at exactly. >>Yeah. It's like the, it's like the movies where they, uh, instead of breaking into jail, they hide in the food delivery truck. And it's, it's that, that's the metaphor that I like perfectly. The fence doesn't work. If your truck, if you open the door once a week, it doesn't matter how big defenses. Yeah. So that's >>Good Dallas funny. >>And I, I think too, like when I used to be an engineer before I joined Google, just like how easy it is to bring in a third party package or something, you know, you need like an image editing software, like just go find one off the internet. And I think, you know, developers are slowly doing a mind shift. They're like, Hey, if I introduce a new dependency, you know, there's going to be, I'm going to have to maintain this thing and understand >>It's a little bit of a decentralized view too. Also, you got a little bit of that. Hey, if you sign it, you own it. If it tracks back to you, okay, you are, your fingerprints are, if you will, or on that chain of >>Custody and custody. >>Exactly. I was going to say, when I saw chain guard at first of course, I thought that my pant leg riding a bike, but then of course the supply chain things coming in, like on a conveyor belt, conveyor, conveyor belt. But that, that whole question of chain of custody, it isn't, it isn't as simple as a process where someone grabs some code, embeds it in, what's going on, pushes it out somewhere else. That's not the final step typically. Yeah. >>So somebody else grabs that one. And does it again, 35 more times, >>The one, how do you verify that? That's yeah, it seems like an obvious issue that needs to be addressed. And yet, apparently from what you're telling us for quite a while, people thought you were a little bit in that, >>And it's not just me. I mean, not so Ken Thompson of bell labs and he wrote the book >>He wrote, yeah, it was a seatbelt that I grew >>Up on in the eighties. 
He gave a famous lecture called uh, reflections on trusting trust, where he pranked all of his colleagues at bell labs by putting a back door in a compiler. And that put back doors into every program that compiled. And he was so clever. He even put it in, he made that compiler put a backdoor into the disassembler to hide the back door. So he spent weeks and, you know, people just kind of gave up. And I think at that point they were just like, oh, we can't trust any software ever. And just forgot about it and kept going on and living their lives. So this is a 40 year old problem. We only care about it now. >>It's totally true. A lot of these old sacred cows. So I would have done life cycles, not really that relevant anymore because the workflows are changing. These new Bev changes. It's complete dev ops is taken over. Let's just admit it. Right. So if we have ops is taken over now, cloud native apps are hitting the scene. This is where I think there's a structural industry change, not just the community. So with that in mind, how do you guys vector into that in terms of a market entry? What's just thinking around product. Obviously you got a higher, did you guys raise some capital in process? A little bit of a capital raise five, no problem. Todd market, but product wise, you've got to come in, get the beachhead. >>I mean, we're, we're, we're casting a wide net right now and talking to as many customers like we've met a lot of these, these customer potential customers through the communities, you know, that we've been building and we did a supply chain security con helped with that event, this, this Monday to negative one event and solar winds and Citibank were there and talking about their solutions. Um, and so I think, you know, and then we'll narrow it down to like people that would make good partners to work with and figure out how they think they're solving the problem today. And really >>How do you guys feel good? You feel good? Well, we got Jerry Chen coming off from gray lock next round. He would get a term sheet, Jerry, this guy's got some action on it in >>There. Probably didn't reply to him on LinkedIn. >>He's coming out with Kronos for him. He just invested 200 million at CrossFit. So you guys should have a great time. Congratulations on the leap. I know it's comfortable to beat Google, a lot of things to work on. Um, and student startups are super fun too, but not easy. None of the female or, you know, he has done it before, so. Right. Cool. What do you think about today? Did the event here a little bit smaller, more VIP event? What's your takeaway on this? >>It's good to be back in person. Obviously we're meeting, we've been associating with folks over zoom and Google meets for a while now and meeting them in person as I go, Hey, no hard to recognize behind the mask, but yeah, we're just glad to sort of be back out in a little bit of normalization. >>Yeah. How's everything in Austin, everyone everyone's safe and good over there. >>Yeah. It's been a long, long pandemic. Lots of ups and downs, but yeah. >>Got to get the music scene back. Most of these are comes back in the house. Everything's all back to normal. >>Yeah. My hair doesn't normally look like this. I just haven't gotten a haircut since this also >>You're going to do well in this market. You got a term sheet like that. Keep the hair, just to get the money. I think I saw your LinkedIn profile and I was wondering it's like, which version are we going to get? Well, super relevant. Super great topic. Congratulations. 
Thanks for coming on. Sharing the story. You're in the queue. Great jumper. Dave Nicholson here on the cube date, one of three days we're back in person of course, hybrid event. Cause the cube.net for all more footage and highlights and remote interviews. So stay tuned more coverage after this short break.
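For readers who want to see the "one command to sign, one command to verify" flow Kim describes above, here is a minimal sketch. It assumes the cosign CLI in its simple keyed mode (a key pair already generated with cosign generate-key-pair) and a placeholder image reference; Chainguard's own tooling and the keyless Sigstore flow are not shown.

```python
import subprocess

IMAGE = "registry.example.com/team/app:1.0.0"  # placeholder image reference

def sign(image: str) -> None:
    # One command to sign: pushes a signature for the image up to the registry.
    subprocess.run(["cosign", "sign", "--key", "cosign.key", image], check=True)

def verify(image: str) -> None:
    # One command to verify: exits non-zero if the signature doesn't check out,
    # which is the hook an organization's admission policy can build on.
    subprocess.run(["cosign", "verify", "--key", "cosign.pub", image], check=True)

if __name__ == "__main__":
    sign(IMAGE)
    verify(IMAGE)
```

The point of the example is the developer experience being described: signing and verification each collapse to a single step, so policy about who you trust can be layered on top rather than bolted onto every build script.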
John Pisano and Ki Lee, Booz Allen | CUBE Conversation, 2021
(upbeat music) >> Announcer: From theCUBE studios in Palo Alto in Boston connecting with thought leaders all around the world, this is theCUBE Conversation. >> Well, welcome to theCUBE Conversation here in theCUBE studios in Palo Alto, California. I'm John Furrier, your host. Got a great conversation with two great guests, going to explore the edge, what it means in terms of commercial, but also national security. And as the world goes digital, we're going to have that deep dive conversation around how it's all transforming. We've got Ki Lee, Vice President of Booz Allen's Digital Business. Ki, great to have you. John Pisano, Principal at Booz Allen's Digital Cloud Solutions. Gentlemen, thanks for coming on. >> And thanks for having us, John. >> So one of the most hottest topics, obviously besides cloud computing having the most refactoring impact on business and government and public sector has been the next phase of cloud growth and cloud scale, and that's really modern applications and consumer, and then here for national security and for governments here in the U.S. is military impact. And as digital transformation starts to go to the next level, you're starting to see the architectures emerge where the edge, the IoT edge, the industrial IoT edge, or any kind of edge concept, 5G is exploding, making that much more of a dense, more throughput for connectivity with wireless. You got Amazon with Snowball, Snowmobile, all kinds of ways to deploy technology, that's IT like and operational technologies. It's causing quite a cloud operational opportunity and disruption, so I want to get into it. Ki, let's start with you. I mean, we're looking at an architecture that's changing both commercial and public sector with the edge. What are the key considerations that you guys see as people have to really move fast in this new architecture of digital? >> Yeah, John, I think it's a great question. And if I could just share our observation on why we even started investing in edge. You mentioned the cloud, but as we've reflected upon kind of the history of IT, then you take a look from mainframes to desktops to servers to cloud to mobile and now IoT, what we observed was that industry investing in infrastructure led to kind of an evolution of IT, right? So as you mentioned, with industry spending billions on IoT and edge, we just feel that that's going to be the next evolution. If you take a look at, you mentioned 5G, I think 5G will be certainly an accelerator to edge because of the resilience, the lower latency and so forth. But taking a look at what's happening in space, you mentioned space earlier as well, right, and what Starlink is doing by putting satellites to actually provide transport into the space, we're thinking that that actually is going to be the next ubiquitous thing. Once transport becomes ubiquitous, just like cloud allows storage to be ubiquitous. We think that the next generation internet will be space-based. So when you think about it, connected, it won't be connected servers per se, it will be connected devices. >> John: Yeah, yeah. >> That's kind of some of the observations and why we've been really focusing on investing in edge. >> I want to come back to that piece around space and edge and bring it from a commercial and then also tactical architecture in a minute 'cause there's a lot to unpack there, role of open source, modern application development, software and hardware supply chains, all are core issues that are going to emerge. 
But I want to get with John real quick on cloud impact, because you think about 5G and the future of work or future of play, you've got people, right? So whether you're at a large concert like Coachella or a 49ers or Patriots game or Redskins game if you're in the D.C. area, you got people there, of congestion, and now you got devices now serving those people. And that's their play, people at work, whether it's a military operation, and you've got work, play, tactical edge things. How is cloud connecting? 'Cause this is like the edge has never been kind of an IT thing. It's been more of a bandwidth or either telco or something else operationally. What's the cloud at scale, cloud operations impact? >> Yeah, so if you think about how these systems are architected and you think about those considerations that Ki kind of touched on, a lot of what you have to think about now is what aspects of the application reside in the cloud, where you tend to be less constrained. And then how do you architect that application to move out towards the edge, right? So how do I tier my application? Ultimately, how do I move data and applications around the ecosystem? How do I need to evolve where my application stages things and how that data and those apps are moved to each of those different tiers? So when we build a lot of applications, especially if they're in the cloud, they're built with some of those common considerations of elasticity, scalability, all those things; whereas when you talk about congestion and disconnected operations, you lose a lot of those characteristics, and you have to kind of rethink that. >> Ki, let's get into the aspect you brought up, which is space. And then I was mentioning the tactical edge from a military standpoint. These are use cases of deployments, and in fact, this is how people have to work now. So you've got the future of work or play, and now you've got the situational deployments, whether it's a new tower of next to a stadium. We've all been at a game or somewhere or a concert where we only got five bars and no connectivity. So we know what that means. So now you have people congregating in work or play, and now you have a tactical deployment. What's the key things that you're seeing that it's going to help make that better? Are there any breakthroughs that you see that are possible? What's going on in your view? >> Yeah, I mean, I think what's enabling all of this, again, one is transport, right? So whether it's 5G to increase the speed and decrease the latency, whether it's things like Starlink with making transport and comms ubiquitous, that tied with the fact that ships continue to get smaller and faster, right? And when you're thinking about tactical edge, those devices have limited size, weight, power conditions and constraints. And so the software that goes on them has to be just as lightweight. And that's why we've actually partnered with SUSE and what they've done with K3s to do that. So I think those are some of the enabling technologies out there. John, as you've kind of alluded to it, there are additional challenges as we think about it. We're not, it's not a simple transition and monetization here, but again, we think that this will be the next major disruption. >> What do you guys think, John, if you don't mind weighing in too on this as modern application development happens, we just were covering CloudNativeCon and KubeCon, DockerCon, containers are very popular. Kubernetes is becoming super great. 
As you look at the telco landscape where we're kind of converging this edge, it has to be commercially enterprise grade. It has to have that transit and transport that's intelligent and all these new things. How does open source fit into all this? Because we're seeing open source becoming very reliable, more people are contributing to open source. How does that impact the edge in your opinion? >> So from my perspective, I think it's helping accelerate things that traditionally maybe may have been stuck in the traditional proprietary software confines. So within our mindset at Booz Allen, we were very focused on open architecture, open based systems, which open source obviously is an aspect of that. So how do you create systems that can easily interface with each other to exchange data, and how do you leverage tools that are available in the open source community to do that? So containerization is a big drive that is really going throughout the open source community. And there's just a number of other tools, whether it's tools that are used to provide basic services like how do I move code through a pipeline all the way through? How do I do just basic hardening and security checking of my capabilities? Historically, those have tend to be closed source type apps, whereas today you've got a very broad community that's able to very quickly provide and develop capabilities and push it out to a community that then continues to adapt and add to it or grow that library of stuff. >> Yeah, and then we've got trends like Open RAN. I saw some Ground Station for the AWS. You're starting to see Starlink, you mentioned. You're bringing connectivity to the masses. What is that going to do for operators? Because remember, security is a huge issue. We talk about security all the time. Where does that kind of come in? Because now you're really OT, which has been very purpose-built kind devices in the old IoT world. As the new IoT and the edge develop, you're going to need to have intelligence. You're going to be data-driven. There is an open source impact key. So, how, if I'm a senior executive, how do I get my arms around this? I really need to think this through because the security risks alone could be more penetration areas, more surface area. >> Right. That's a great question. And let me just address kind of the value to the clients and the end users in the digital battlefield as our warriors to increase survivability and lethality. At the end of the day from a mission perspective, we know we believe that time's a weapon. So reducing any latency in that kind of observe, orient, decide, act OODA loop is value to the war fighter. In terms of your question on how to think about this, John, you're spot on. I mean, as I've mentioned before, there are various different challenges, one, being the cyber aspect of it. We are absolutely going to be increasing our attack surface when you think about putting processing on edge devices. There are other factors too, non-technical that we've been thinking about s we've tried to kind of engender and kind of move to this kind of edge open ecosystem where we can kind of plug and play, reuse, all kind of taking the same concepts of the open-source community and open architectures. But other things that we've considered, one, workforce. As you mentioned before, when you think about these embedded systems and so forth, there aren't that many embedded engineers out there. But there is a workforce that are digital and software engineers that are trained. 
So how do we actually create an abstraction layer so that we can leverage that workforce and not be limited by some of the constraints of the embedded engineering talent out there? The other thing, in talking with several colleagues, clients, and partners, is that what people aren't thinking about is the total cost of ownership when you start putting software on these edge devices in the billions. How do you maintain an enterprise that potentially consists of billions of devices? So extending the standard kind of DevSecOps that we use to automate CI/CD to a cloud, how do we move it from cloud to jet? That's kind of what we say. How do we move DevSecOps to automate secure containers all the way to the edge devices to mitigate some of those total cost of ownership challenges? >> It's interesting, as you have software defined, this embedded system discussion is hugely relevant and important, because when you have software defined, you've got to be faster in the deployment of these devices. You need security, 'cause remember, supply chain is on the hardware side and in the software too. >> Absolutely. >> So if you're going to have a serviceability model where you have to shift left, as they say, you've got to be at the point of CI/CD flows; you need to have security at the time of coding. So all these paradigms are new in Day-2 operations. I call it Day-0 operations, 'cause it should be in there every day too. >> Yep. Absolutely. >> But you've got to service these things. So software supply chain becomes a very interesting conversation. It's a new one that we're having on theCUBE and in the industry. Software supply chain is a hugely relevant, important topic, because now you've got to interface it not just with other software, but with hardware. How do you service devices in space? You can't send a break/fix person into space. (chuckles) Maybe you will soon, but again, this brings up a whole set of issues. >> No, so certainly, I don't think anyone has the answers. We sure don't have all the answers, but we're very optimistic. If you take a look at what's going on within the U.S. Air Force and what Chief Software Officer Nic Chaillan and his team have done, and we're a supporter of this and a plankowner of Platform One, they were ahead of the curve in kind of commoditizing some of these DevSecOps principles in partnership with the DoD CIO and that shift-left concept. They've got a certified and accredited platform that provides that DevSecOps. They have an entire repository in the Iron Bank that allows for hardened containers and reciprocity. All those things are of value to the mission and around the edge, because those are all accelerators. I think there's an opportunity to leverage industry best practices and patterns there as well. You kind of touched upon this, John, but these devices honestly just become firmware. If the devices themselves just become firmware, you can just push over-the-wire updates onto them. So I'm optimistic. I think all the piece parts are taking place across industry and in the government. And I think we're primed to move into this next evolution. >> Yeah. And it's also some collaboration. Why I'm bringing up the open source angle, and I think this is where the major focus will shift to, and I want to get your reaction to it, is because open source is seeing a lot more collaboration. You mentioned some of the embedded devices.
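To make the supply-chain point concrete, here is a minimal Python sketch of the kind of check an edge device could run before applying an over-the-wire update: verify that the downloaded artifact matches a digest published out-of-band by a hardened source such as an Iron Bank style registry. The allowlist file, artifact names, and paths are hypothetical, and this is an illustrative sketch rather than Platform One's actual tooling.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical allowlist: artifact name -> expected SHA-256 digest,
# distributed out-of-band from a trusted, hardened registry.
ALLOWLIST = json.loads(Path("approved_digests.json").read_text())


def sha256_of(path: Path) -> str:
    """Stream the file so large update bundles don't have to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_update(bundle: Path, name: str) -> bool:
    """Return True only if the downloaded bundle matches its approved digest."""
    expected = ALLOWLIST.get(name)
    if expected is None:
        return False  # unknown artifact: refuse to apply it
    return sha256_of(bundle) == expected


if __name__ == "__main__":
    ok = verify_update(Path("updates/sensor-agent-1.0.tar"), "sensor-agent-1.0")
    print("apply update" if ok else "reject update")
```

In practice the check would verify a cryptographic signature rather than a bare digest, but the shape is the same: refuse anything that cannot be traced back to the hardened source.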
Some people are saying, this is the weakest link in the supply chain, and it can be shored up pretty quickly. But there's other data, other collective intelligence that you can get from sharing data, for instance, which hasn't really been a best practice in the cybersecurity industry. So now open source, it's all been about sharing, right? So you got the confluence of these worlds colliding, all aspects of culture and Dev and Sec and Ops and engineering all coming together. John, what's your reaction to that? Because this is a big topic. >> Yeah, so it's providing a level of transparency that historically we've not seen, right? So in that community, having those pipelines, the results of what's coming out of it, it's allowing anyone in that life cycle or that supply chain to look at it, see the state of it, and make a decision on, is this a risk I'm willing to take or not? Or am I willing to invest and personally contribute back to the community to address that because it's important to me and it's likely going to be important to some of the others that are using it? So I think it's critical, and it's enabling that acceleration and shift that I talked about, that now that everybody can see it, look inside of it, understand the state of it, contribute to it, it's allowing us to break down some of the barriers that Ki talked about. And it reinforces that excitement that we're seeing now. That community is enabling us to move faster and do things that maybe historically we've not been able to do. >> Ki, I'd love to get your thoughts. You mentioned battlefield, and I've been covering a lot of the tactical edge around the DOD's work. You mentioned about the military on the Air Force side, Platform One, I believe, was from the Air Force work that they've done, all cloud native kind of directions. But when you talk about a war field, you talk about connectivity. I mean, who controls the DNS in Taiwan, or who controls the DNS in Korea? I mean, we have to deploy, you've got to stand up infrastructure. How about agility? I mean, tactical command and control operations, this has got to be really well done. So this is not a trivial thing. >> No. >> How are you seeing this translate into the edge innovation area? (laughs) >> It's certainly not a trivial thing, but I think, again, I'm encouraged by how government and industry are partnering up. There's a vision set around this joint all domain command control, JADC2. And then all the services are getting behind that, are looking into that, and this vision of this military, internet of military things. And I think the key thing there, John, as you mentioned, it's not just the connected of the sensors, which requires the transport again, but also they have to be interoperable. So you can have a bunch of sensors and platforms out there, they may be connected, but if they can't speak to one another in a common language, that kind of defeats the purpose and the mission value of that sensor or shooter kind of paradigm that we've been striving for for ages. So you're right on. I mean, this is not a trivial thing, but I think over history we've learned quite a bit. Technology and innovation is happening at just an amazing rate where things are coming out in months as opposed to decades as before. I agree, not trivial, but again, I think there are all the piece parts in place and being put into place. >> I think you mentioned earlier that the personnel, the people, the engineers that are out there, not enough, more of them coming in. 
I think now the appetite and the provocative nature of this shift in tech is going to attract a lot of people because the old adage is these are hard problems attracts great people. You got in new engineering, SRE like scale engineering. You have software development, that's changing, becoming much more robust and more science-driven. You don't have to be just a coder as a software engineer. You could be coming at it from any angle. So there's a lot more opportunities from a personnel standpoint now to attract great people, and there's real hard problems to solve, not just security. >> Absolutely. Definitely. I agree with that 100%. I would also contest that it's an opportunity for innovators. We've been thinking about this for some time, and we think there's absolute value from various different use cases that we've identified, digital battlefield, force protection, disaster recovery, and so forth. But there are use cases that we probably haven't even thought about, even from a commercial perspective. So I think there's going to be an opportunity just like the internet back in the mid '90s for us to kind of innovate based on this new kind of edge environment. >> It's a revolution. New leadership, new brands are going to emerge, new paradigms, new workflows, new operations, clearly great stuff. I want to thank you guys for coming on. I also want to thank Rancher Labs for sponsoring this conversation. Without their support, we wouldn't be here. And now they were acquired by SUSE. We've covered their event with theCUBE virtual last year. What's the connection with those guys? Can you guys take a minute to explain the relationship with SUSE and Rancher? >> Yeah. So it's actually it's fortuitous. And I think we just, we got lucky. There's two overall aspects of it. First of all, we are both, we partner on the Platform One basic ordering agreement. So just there we had a common mentality of DevSecOps. And so there was a good partnership there, but then when we thought about we're engaging it from an edge perspective, the K3s, right? I mean, they're a leader from a container perspective obviously, but the fact that they are innovators around K3s to reduce that software footprint, which is required on these edge devices, we kind of got a twofer there in that partnership. >> John, any comment on your end? >> Yeah, I would just amplify, the K3s aspects in leveraging the containers, a lot of what we've seen success in when you look at what's going on, especially on that tactical edge around enabling capabilities, containers, and the portability it provides makes it very easy for us to interface and integrate a lot of different sensors to close the OODA loop to whoever is wearing or operating that a piece of equipment that the software is running on. >> Awesome, I'd love to continue the conversation on space and the edge and super great conversation to have you guys on. Really appreciate it. I do want to ask you guys about the innovation and the opportunities of this new shift that's happening as the next big thing is coming quickly. And it's here on us and that's cloud, I call it cloud 2.0, the cloud scale, modern software development environment, edge with 5G changing the game. Ki, I completely agree with you. And I think this is where people are focusing their attention from startups to companies that are transforming and re-pivoting or refactoring their existing assets to be positioned. And you're starting to see clear winners and losers. There's a pattern emerging. 
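As a concrete and purely hypothetical illustration of why the K3s partnership matters at the edge, the sketch below pushes a small containerized workload to a K3s node using the standard Kubernetes API that K3s serves; the kubeconfig path, image name, and resource limits are assumptions for illustration, not anything taken from this conversation.

```python
# Hypothetical sketch: deploy a small workload to a K3s node via the standard
# Kubernetes API. Paths, image names, and limits are illustrative assumptions.
from kubernetes import client, config

# K3s typically writes its kubeconfig here; adjust for your environment.
config.load_kube_config(config_file="/etc/rancher/k3s/k3s.yaml")

container = client.V1Container(
    name="sensor-agent",
    image="registry.example.com/sensor-agent:1.0",  # hypothetical hardened image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "100m", "memory": "64Mi"},
        limits={"cpu": "250m", "memory": "128Mi"},  # sized for a constrained edge box
    ),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="sensor-agent"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "sensor-agent"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "sensor-agent"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The point is only that K3s keeps the runtime footprint small enough for size-, weight-, and power-constrained hardware while exposing the same API as any other Kubernetes cluster, which is the portability John is describing.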
You got to be in the cloud, you got to be leveraging data, you got to be horizontally scalable, but you got to have AI machine learning in there with modern software practices that are secure. That's the playbook. Some people are making it. Some people are not getting there. So I'd ask you guys, as telcos become super important and the ability to be a telco now, we just mentioned standing up a tactical edge, for instance. Launching a satellite, a couple of hundred K, you can launch a CubeSat. That could be good and bad. So the telco business is changing radically. Cloud, telco cloud is emerging as an edge phenomenon with 5G, certainly business commercial benefits more than consumer. How do you guys see the innovation and disruption happening with telco? >> As we think through cloud to edge, one thing that we realize, because our definition of edge, John, was actually at the point of data collection on the sensor themselves. Others' definition of edge is we're a little bit further back, what we call it the edge of the IT enterprise. But as we look at this, we realize that you needed this kind of multi echelon environment from your cloud to your tactical clouds where you can do some processing and then at the edge of themselves. Really at the end of the day, it's all about, I think, data, right? I mean, everything we're talking about, it's still all about the data, right? The AI needs the data, the telco is transporting the data. And so I think if you think about it from a data perspective in relationship to the telcos, one, edge will actually enable a very different paradigm and a distributed paradigm for data processing. So, hey, instead of bringing the data to some central cloud which takes bandwidth off your telcos, push the products to the data. So mitigate what's actually being sent over those telco lines to increase the efficiencies of them. So I think at the end of the day, the telcos are going to have a pretty big component to this, even from space down to ground station, how that works. So the network of these telcos, I think, are just going to expand. >> John, what's your perspective? I mean, startups are coming out. The scalability, speed of innovation is a big factor. The old telco days had, I mean, months and years, new towers go up and now you got a backbone. It's kind of a slow glacier pace. Now it's under siege with rapid innovation. >> Yeah, so I definitely echo the sentiments that Ki would have, but I would also, if we go back and think about the digital battle space and what we've talked about, faster speeds being available in places it's not been before is great. However, when you think about facing an adversary that's a near-peer threat, the first thing they're going to do is make it contested, congested, and you have to be able to survive. While yes, the pace of innovation is absolutely pushing comms to places we've not had it before, we have to be mindful to not get complacent and over-rely on it, assuming it'll always be there. 'Cause I know in my experience wearing the uniform, and even if I'm up against an adversary, that's the first thing I'm going to do is I'm going to do whatever I can to disrupt your ability to communicate. So how do you take it down to that lowest level and still make that squad, the platoon, whatever that structure is, continue survivable and lethal. So that's something I think, as we look at the innovations, we need to be mindful of that. So when I talk about how do you architect it? What services do you use? 
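One way to picture pushing the processing out to where the data is collected, rather than hauling raw feeds back over a congested or contested link, is the small self-contained Python sketch below; the sensor values, window sizes, and uplink stub are all invented for illustration.

```python
# Illustrative edge pipeline: process raw samples locally, ship only compact
# summaries upstream, and buffer them when the link is down.
import random
import statistics
from collections import deque


def read_window(samples: int = 1000) -> list[float]:
    """Stand-in for a burst of raw, high-rate sensor samples at the edge."""
    return [20.0 + random.gauss(0.0, 2.0) for _ in range(samples)]


def summarize(window: list[float]) -> dict:
    """Reduce thousands of raw samples to a handful of numbers for backhaul."""
    return {
        "count": len(window),
        "mean": round(statistics.fmean(window), 3),
        "min": round(min(window), 3),
        "max": round(max(window), 3),
    }


def uplink_available() -> bool:
    """Stub for a contested or congested link that comes and goes."""
    return random.random() > 0.3


def run(cycles: int = 10) -> None:
    backlog: deque = deque(maxlen=1000)  # survive disconnected periods
    for _ in range(cycles):
        backlog.append(summarize(read_window()))
        while backlog and uplink_available():
            report = backlog.popleft()
            print("sent upstream:", report)  # placeholder for the real transport


if __name__ == "__main__":
    run()
```

Whether that buffering and summarization lives on the device, at a tactical cloud, or back in the enterprise is exactly the kind of tiering and survivability question being raised here.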
Those are all those things that you have to think about. What if I lose it at this echelon? How do I continue the mission? >> Yeah, it's interesting. And if you look at how companies have been procuring and consuming technology, Ki, it's been like siloed. "Okay, we've got a workplace workforce project, and we have the tactical edge, and we have the siloed IT solution," when really work and play, whether it's work here in John's example, is the war fighter. And so his concern is safety, his life and protection. >> Yeah. >> The other department has to manage the comms, (laughs) and so they have to have countermeasures and contingencies ready to go. So all this is, they all integrate it now. It's not like one department. It's like it's together. >> Yeah. John, I love what you just said. I mean, we have to get away from this siloed thinking not only within a single organization, but across the enterprise. From a digital battlefield perspective, it's a joint fight, so even across these enterprise of enterprises, So I think you're spot on. We have to look horizontally. We have to integrate, we have to inter-operate, and by doing that, that's where the innovation is also going to be accelerated too, not reinventing the wheel. >> Yeah, and I think the infrastructure edge is so key. It's going to be very interesting to see how the existing incumbents can handle themselves. Obviously the towers are important. 5G obviously, that's more deployments, not as centralized in terms of the spectrum. It's more dense. It's going to create more connectivity options. How do you guys see that impacting? Because certainly more gear, like obviously not the centralized tower, from a backhaul standpoint but now the edge, the radios themselves, the wireless transit is key. That's the real edge here. How do you guys see that evolving? >> We're seeing a lot of innovations actually through small companies who are really focused on very specific niche problems. I think it's a great starting point because what they're doing is showing the art of the possible. Because again, we're in a different environment now. There's different rules. There's different capabilities. But then we're also seeing, you mentioned earlier on, some of the larger companies, the Amazons, the Microsofts, also investing as well. So I think the merge of the, you know, or the unconstrained or the possible by these small companies that are just kind of driving innovations supported by the maturity and the heft of these large companies who are building out these hardened kind of capabilities, they're going to converge at some point. And that's where I think we're going to get further innovation. >> Well, I really appreciate you guys taking the time. Final question for you guys, as people are watching this, a lot of smart executives and teams are coming together to kind of put the battle plans together for their companies as they transition from old to this new way, which is clearly cloud-scale, role of data. We hit out all the key points I think here. As they start to think about architecture and how they deploy their resources, this becomes now the new boardroom conversation that trickles down and includes everyone, including the developers. The developers are now going to be on the front lines. Mid-level managers are going to be integrated in as well. It's a group conversation. 
What are some of the advice that you would give to folks who are in this mode of planning architecture, trying to be positioned to come out of this pandemic with a massive growth opportunity and to be on the right side of history? What's your advice? >> It's such a great question. So I think you touched upon it. One is take the holistic approach. You mentioned architectures a couple of times, and I think that's critical. Understanding how your edge architectures will let you connect with your cloud architecture so that they're not disjointed, they're not siloed. They're interoperable, they integrate. So you're taking that enterprise approach. I think the second thing is be patient. It took us some time to really kind of, and we've been looking at this for about three years now. And we were very intentional in assessing the landscape, how people were discussing around edge and kind of pulling that all together. But it took us some time to even figure it out, hey, what are the use cases? How can we actually apply this and get some ROI and value out for our clients? So being a little bit patient in thinking through kind of how we can leverage this and potentially be a disruptor. >> John, your thoughts on advice to people watching as they try to put the right plans together to be positioned and not foreclose any future value. >> Yeah, absolutely. So in addition to the points that Ki raised, I would, number one, amplify the fact of recognize that you're going to have a hybrid environment of legacy and modern capabilities. And in addition to thinking open architectures and whatnot, think about your culture, the people, your processes, your techniques and whatnot, and your governance. How do you make decisions when it needs to be closed versus open? Where do you invest in the workforce? What decisions are you going to make in your architecture that drive that hybrid world that you're going to live in? All those recipes, patience, open, all that, that I think we often overlook the cultural people aspect of upskilling. This is a very different way of thinking on modern software delivery. How do you go through this lifecycle? How's security embedded? So making sure that's part of that boardroom conversation I think is key. >> John Pisano, Principal at Booz Allen Digital Cloud Solutions, thanks for sharing that great insight. Ki Lee, Vice President at Booz Allen Digital Business. Gentlemen, great conversation. Thanks for that insight. And I think people watching are going to probably learn a lot on how to evaluate startups to how they put their architecture together. So I really appreciate the insight and commentary. >> Thank you. >> Thank you, John. >> Okay. I'm John Furrier. This is theCUBE Conversation. Thanks for watching. (upbeat music)
>>Okay, this is my presentation on coherent nonlinear dynamics and combinatorial optimization. This is going to be a talk to introduce an approach we're taking to the analysis of the performance of coherent Ising machines. So let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments or spins, with the total energy given by the expression shown at the bottom left of this slide. Here, the sigma variables take binary values. The matrix element J_ij represents the interaction strength and sign between any pair of spins i and j, and h_i represents a possible local magnetic field acting on each spin. The Ising ground state problem is to find an assignment of binary spin values that achieves the lowest possible value of total energy. And an instance of the Ising problem is specified by giving numerical values for the matrix J and vector h. Although the Ising model originates in physics, we understand the ground state problem to correspond to what would be called quadratic binary optimization in the field of operations research, and in fact, in terms of computational complexity theory, it can be established that the Ising ground state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which it is expected that the runtime required by any computational algorithm to find exact solutions should asymptotically scale exponentially with the number of spins n for worst-case instances at each n. Of course, there's no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances. And it's also not generally the case in practical optimization scenarios that we demand absolute optimum solutions. Usually we're more interested in just getting the best solution we can within an affordable cost, where costs may be measured in terms of time, service fees, and/or energy required for a computation. This focuses great interest on so-called heuristic algorithms for the Ising problem and other NP-complete problems, which generally get very good but not guaranteed optimum solutions and run much faster than algorithms that are designed to find absolute optima. To get some feeling for present-day numbers, we can consider the famous traveling salesman problem, for which extensive compilations of benchmarking data may be found online. A recent study found that the best known TSP solvers required median run times, across a library of problem instances, that scaled as a very steep root exponential for n up to approximately 4,500. This gives some indication of the change in runtime scaling for generic as opposed to worst-case problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with n ranging from 131 to 744,710. Instances from this library with n between 6,880 and 13,584 were first solved just a few years ago in 2017, requiring days of run time on a 48-core 2-GHz cluster, while instances with n greater than or equal to 14,233 remain unsolved exactly by any means.
Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library, with, for example, a solution within 0.14% of a known lower bound having been discovered for an instance with n equal to 19,289, requiring approximately two days of run time on a single core at 2.4 GHz. Now, if we simple-mindedly extrapolate the root-exponential scaling from the study that went up to n of approximately 4,500, we might expect that an exact solver would require something more like a year of run time on the 48-core cluster used for the n equals 13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances with much lower cost. At the extreme end, the largest TSP ever solved exactly has n equal to 85,900. This is an instance derived from a 1980s VLSI design, and it required 136 CPU-years of computation, normalized to a single core at 2.4 GHz. But the much larger so-called World TSP benchmark instance, with n equal to 1,904,711, has been solved approximately, with an optimality gap bounded below 0.474%. Coming back to the general practical concerns of applied optimization, we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for Max-Cut and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evidence of cryptic structure in terms of what types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results from Max-Cut and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms. In the practice of solving hard optimization problems, there arises a critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur high costs to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance. Unfortunately, we still have very little conceptual insight about what makes a specific problem instance good or bad for any given heuristic optimization algorithm. This has certainly been pinpointed by researchers in the field as a circumstance that must be addressed. So adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance with lower cost on classes of problem instances that are underserved by existing approaches, as well as fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it's natural to talk not only about novel algorithms for conventional CPUs but also about highly customized special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimization, such as the Ising problem. So against that backdrop, I'd like to use my remaining time to introduce our work on analysis of coherent Ising machine architectures and associated optimization algorithms.
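For reference in what follows, the Ising energy described at the start of this talk can be written, up to the usual sign convention, as

E(\sigma) = -\sum_{i<j} J_{ij}\, \sigma_i \sigma_j - \sum_i h_i\, \sigma_i, \qquad \sigma_i \in \{-1, +1\},

and the ground state problem is to find the spin assignment minimizing E for a given matrix J and vector h; that is the problem the machines described next are built to attack.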
These machines, in general, are a novel class of information processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog, physical or cyber physical systems, in contrast to both MAWR traditional engineering approaches that build using machines using conventional electron ICS and more radical proposals that would require large scale quantum entanglement. The emerging paradigm of coherent easing machines leverages coherent nonlinear dynamics in photonic or Opto electronic platforms to enable near term construction of large scale prototypes that leverage post Simoes information dynamics, the general structure of of current CM systems has shown in the figure on the right. The role of the easing spins is played by a train of optical pulses circulating around a fiber optical storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into a refugee A, which uses them to compute perturbations to be applied to each pulse by a synchronized optical injections. These perturbations, air engineered to implement the spin, spin coupling and local magnetic field terms of the easing Hamiltonian, corresponding to a linear part of the CME Dynamics, a synchronously pumped parametric amplifier denoted here as PPL and Wave Guide adds a crucial nonlinear component to the CIA and Dynamics as well. In the basic CM algorithm, the pump power starts very low and has gradually increased at low pump powers. The amplitude of the easing spin pulses behaviors continuous, complex variables. Who Israel parts which can be positive or negative, play the role of play the role of soft or perhaps mean field spins once the pump, our crosses the threshold for parametric self oscillation. In the optical fiber ring, however, the attitudes of the easing spin pulses become effectively Qantas ized into binary values while the pump power is being ramped up. The F P J subsystem continuously applies its measurement based feedback. Implementation of the using Hamiltonian terms, the interplay of the linear rised using dynamics implemented by the F P G A and the threshold conversation dynamics provided by the sink pumped Parametric amplifier result in the final state of the optical optical pulse amplitude at the end of the pump ramp that could be read as a binary strain, giving a proposed solution of the easing ground state problem. This method of solving easing problem seems quite different from a conventional algorithm that runs entirely on a digital computer as a crucial aspect of the computation is performed physically by the analog, continuous, coherent, nonlinear dynamics of the optical degrees of freedom. In our efforts to analyze CIA and performance, we have therefore turned to the tools of dynamical systems theory, namely, a study of modifications, the evolution of critical points and apologies of hetero clinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent using machines and hope that our approach can lead to both improvements of the course, the AM algorithm and a pre processing rubric for rapidly assessing the CME suitability of new instances. Okay, to provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators in the CME architecture just described. 
We can think of each of the pulse time slots circulating around the fiber ring, as are presenting an independent Opio. We can think of a single Opio degree of freedom as a single, resonant optical node that experiences linear dissipation, do toe out coupling loss and gain in a pump. Nonlinear crystal has shown in the diagram on the upper left of this slide as the pump power is increased from zero. As in the CME algorithm, the non linear game is initially to low toe overcome linear dissipation, and the Opio field remains in a near vacuum state at a critical threshold. Value gain. Equal participation in the Popeo undergoes a sort of lazing transition, and the study states of the OPIO above this threshold are essentially coherent states. There are actually two possible values of the Opio career in amplitude and any given above threshold pump power which are equal in magnitude but opposite in phase when the OPI across the special diet basically chooses one of the two possible phases randomly, resulting in the generation of a single bit of information. If we consider to uncoupled, Opio has shown in the upper right diagram pumped it exactly the same power at all times. Then, as the pump power has increased through threshold, each Opio will independently choose the phase and thus to random bits are generated for any number of uncoupled. Oppose the threshold power per opio is unchanged from the single Opio case. Now, however, consider a scenario in which the two appeals air, coupled to each other by a mutual injection of their out coupled fields has shown in the diagram on the lower right. One can imagine that depending on the sign of the coupling parameter Alfa, when one Opio is lazing, it will inject a perturbation into the other that may interfere either constructively or destructively, with the feel that it is trying to generate by its own lazing process. As a result, when came easily showed that for Alfa positive, there's an effective ferro magnetic coupling between the two Opio fields and their collective oscillation threshold is lowered from that of the independent Opio case. But on Lee for the two collective oscillation modes in which the two Opio phases are the same for Alfa Negative, the collective oscillation threshold is lowered on Lee for the configurations in which the Opio phases air opposite. So then, looking at how Alfa is related to the J. I. J matrix of the easing spin coupling Hamiltonian, it follows that we could use this simplistic to a p o. C. I am to solve the ground state problem of a fair magnetic or anti ferro magnetic ankles to easing model simply by increasing the pump power from zero and observing what phase relation occurs as the two appeals first start delays. Clearly, we can imagine generalizing this story toe larger, and however the story doesn't stay is clean and simple for all larger problem instances. And to find a more complicated example, we only need to go to n equals four for some choices of J J for n equals, for the story remains simple. Like the n equals two case. The figure on the upper left of this slide shows the energy of various critical points for a non frustrated and equals, for instance, in which the first bifurcated critical point that is the one that I forget to the lowest pump value a. Uh, this first bifurcated critical point flows as symptomatically into the lowest energy easing solution and the figure on the upper right. However, the first bifurcated critical point flows to a very good but sub optimal minimum at large pump power. 
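A minimal mean-field sketch of the two-OPO picture above, using a commonly assumed normalization rather than anything stated explicitly on the slides: with in-phase amplitudes x_1 and x_2, normalized pump parameter p, and mutual injection strength \alpha,

\dot{x}_1 = (p - 1)\, x_1 - x_1^3 + \alpha\, x_2, \qquad \dot{x}_2 = (p - 1)\, x_2 - x_2^3 + \alpha\, x_1.

Linearizing about the origin, the in-phase mode x_1 = x_2 grows at rate (p - 1) + \alpha and the out-of-phase mode at (p - 1) - \alpha, so oscillation first sets in at p = 1 - |\alpha|: in the in-phase configuration when \alpha > 0 (the ferromagnetic case) and in the out-of-phase configuration when \alpha < 0, exactly the threshold-lowering behavior just described. The n = 4 examples above show where this simple two-spin picture starts to break down.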
The global minimum is actually given by a distinct critical critical point that first appears at a higher pump power and is not automatically connected to the origin. The basic C am algorithm is thus not able to find this global minimum. Such non ideal behaviors needs to become more confident. Larger end for the n equals 20 instance, showing the lower plots where the lower right plot is just a zoom into a region of the lower left lot. It can be seen that the global minimum corresponds to a critical point that first appears out of pump parameter, a around 0.16 at some distance from the idiomatic trajectory of the origin. That's curious to note that in both of these small and examples, however, the critical point corresponding to the global minimum appears relatively close to the idiomatic projector of the origin as compared to the most of the other local minima that appear. We're currently working to characterize the face portrait topology between the global minimum in the antibiotic trajectory of the origin, taking clues as to how the basic C am algorithm could be generalized to search for non idiomatic trajectories that jump to the global minimum during the pump ramp. Of course, n equals 20 is still too small to be of interest for practical optimization applications. But the advantage of beginning with the study of small instances is that we're able reliably to determine their global minima and to see how they relate to the 80 about trajectory of the origin in the basic C am algorithm. In the smaller and limit, we can also analyze fully quantum mechanical models of Syrian dynamics. But that's a topic for future talks. Um, existing large scale prototypes are pushing into the range of in equals 10 to the 4 10 to 5 to six. So our ultimate objective in theoretical analysis really has to be to try to say something about CIA and dynamics and regime of much larger in our initial approach to characterizing CIA and behavior in the large in regime relies on the use of random matrix theory, and this connects to prior research on spin classes, SK models and the tap equations etcetera. At present, we're focusing on statistical characterization of the CIA ingredient descent landscape, including the evolution of critical points in their Eigen value spectra. As the pump power is gradually increased. We're investigating, for example, whether there could be some way to exploit differences in the relative stability of the global minimum versus other local minima. We're also working to understand the deleterious or potentially beneficial effects of non ideologies, such as a symmetry in the implemented these and couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances such as quadratic, binary optimization with constraints. Eso In closing, I should acknowledge people who did the hard work on these things that I've shown eso. My group, including graduate students Ed winning, Daniel Wennberg, Tatsuya Nagamoto and Atsushi Yamamura, have been working in close collaboration with Syria Ganguly, Marty Fair and Amir Safarini Nini, all of us within the Department of Applied Physics at Stanford University. On also in collaboration with the Oshima Moto over at NTT 55 research labs, Onda should acknowledge funding support from the NSF by the Coherent Easing Machines Expedition in computing, also from NTT five research labs, Army Research Office and Exxon Mobil. Uh, that's it. Thanks very much. 
>>Mhm e >>t research and the Oshie for putting together this program and also the opportunity to speak here. My name is Al Gore ism or Andy and I'm from Caltech, and today I'm going to tell you about the work that we have been doing on networks off optical parametric oscillators and how we have been using them for icing machines and how we're pushing them toward Cornum photonics to acknowledge my team at Caltech, which is now eight graduate students and five researcher and postdocs as well as collaborators from all over the world, including entity research and also the funding from different places, including entity. So this talk is primarily about networks of resonate er's, and these networks are everywhere from nature. For instance, the brain, which is a network of oscillators all the way to optics and photonics and some of the biggest examples or metal materials, which is an array of small resonate er's. And we're recently the field of technological photonics, which is trying thio implement a lot of the technological behaviors of models in the condensed matter, physics in photonics and if you want to extend it even further, some of the implementations off quantum computing are technically networks of quantum oscillators. So we started thinking about these things in the context of icing machines, which is based on the icing problem, which is based on the icing model, which is the simple summation over the spins and spins can be their upward down and the couplings is given by the JJ. And the icing problem is, if you know J I J. What is the spin configuration that gives you the ground state? And this problem is shown to be an MP high problem. So it's computational e important because it's a representative of the MP problems on NPR. Problems are important because first, their heart and standard computers if you use a brute force algorithm and they're everywhere on the application side. That's why there is this demand for making a machine that can target these problems, and hopefully it can provide some meaningful computational benefit compared to the standard digital computers. So I've been building these icing machines based on this building block, which is a degenerate optical parametric. Oscillator on what it is is resonator with non linearity in it, and we pump these resonate er's and we generate the signal at half the frequency of the pump. One vote on a pump splits into two identical photons of signal, and they have some very interesting phase of frequency locking behaviors. And if you look at the phase locking behavior, you realize that you can actually have two possible phase states as the escalation result of these Opio which are off by pie, and that's one of the important characteristics of them. So I want to emphasize a little more on that and I have this mechanical analogy which are basically two simple pendulum. But there are parametric oscillators because I'm going to modulate the parameter of them in this video, which is the length of the string on by that modulation, which is that will make a pump. I'm gonna make a muscular. That'll make a signal which is half the frequency of the pump. And I have two of them to show you that they can acquire these face states so they're still facing frequency lock to the pump. But it can also lead in either the zero pie face states on. The idea is to use this binary phase to represent the binary icing spin. So each opio is going to represent spin, which can be either is your pie or up or down. 
And to implement the network of these resonate er's, we use the time off blood scheme, and the idea is that we put impulses in the cavity. These pulses air separated by the repetition period that you put in or t r. And you can think about these pulses in one resonator, xaz and temporarily separated synthetic resonate Er's if you want a couple of these resonator is to each other, and now you can introduce these delays, each of which is a multiple of TR. If you look at the shortest delay it couples resonator wanted to 2 to 3 and so on. If you look at the second delay, which is two times a rotation period, the couple's 123 and so on. And if you have and minus one delay lines, then you can have any potential couplings among these synthetic resonate er's. And if I can introduce these modulators in those delay lines so that I can strength, I can control the strength and the phase of these couplings at the right time. Then I can have a program will all toe all connected network in this time off like scheme, and the whole physical size of the system scales linearly with the number of pulses. So the idea of opium based icing machine is didn't having these o pos, each of them can be either zero pie and I can arbitrarily connect them to each other. And then I start with programming this machine to a given icing problem by just setting the couplings and setting the controllers in each of those delight lines. So now I have a network which represents an icing problem. Then the icing problem maps to finding the face state that satisfy maximum number of coupling constraints. And the way it happens is that the icing Hamiltonian maps to the linear loss of the network. And if I start adding gain by just putting pump into the network, then the OPI ohs are expected to oscillate in the lowest, lowest lost state. And, uh and we have been doing these in the past, uh, six or seven years and I'm just going to quickly show you the transition, especially what happened in the first implementation, which was using a free space optical system and then the guided wave implementation in 2016 and the measurement feedback idea which led to increasing the size and doing actual computation with these machines. So I just want to make this distinction here that, um, the first implementation was an all optical interaction. We also had an unequal 16 implementation. And then we transition to this measurement feedback idea, which I'll tell you quickly what it iss on. There's still a lot of ongoing work, especially on the entity side, to make larger machines using the measurement feedback. But I'm gonna mostly focused on the all optical networks and how we're using all optical networks to go beyond simulation of icing Hamiltonian both in the linear and non linear side and also how we're working on miniaturization of these Opio networks. So the first experiment, which was the four opium machine, it was a free space implementation and this is the actual picture off the machine and we implemented a small and it calls for Mexico problem on the machine. So one problem for one experiment and we ran the machine 1000 times, we looked at the state and we always saw it oscillate in one of these, um, ground states of the icing laboratoria. So then the measurement feedback idea was to replace those couplings and the controller with the simulator. So we basically simulated all those coherent interactions on on FB g. A. And we replicated the coherent pulse with respect to all those measurements. 
And then we injected it back into the cavity and on the near to you still remain. So it still is a non. They're dynamical system, but the linear side is all simulated. So there are lots of questions about if this system is preserving important information or not, or if it's gonna behave better. Computational wars. And that's still ah, lot of ongoing studies. But nevertheless, the reason that this implementation was very interesting is that you don't need the end minus one delight lines so you can just use one. Then you can implement a large machine, and then you can run several thousands of problems in the machine, and then you can compare the performance from the computational perspective Looks so I'm gonna split this idea of opium based icing machine into two parts. One is the linear part, which is if you take out the non linearity out of the resonator and just think about the connections. You can think about this as a simple matrix multiplication scheme. And that's basically what gives you the icing Hambletonian modeling. So the optical laws of this network corresponds to the icing Hamiltonian. And if I just want to show you the example of the n equals for experiment on all those face states and the history Graham that we saw, you can actually calculate the laws of each of those states because all those interferences in the beam splitters and the delay lines are going to give you a different losses. And then you will see that the ground states corresponds to the lowest laws of the actual optical network. If you add the non linearity, the simple way of thinking about what the non linearity does is that it provides to gain, and then you start bringing up the gain so that it hits the loss. Then you go through the game saturation or the threshold which is going to give you this phase bifurcation. So you go either to zero the pie face state. And the expectation is that Theis, the network oscillates in the lowest possible state, the lowest possible loss state. There are some challenges associated with this intensity Durban face transition, which I'm going to briefly talk about. I'm also going to tell you about other types of non aerodynamics that we're looking at on the non air side of these networks. So if you just think about the linear network, we're actually interested in looking at some technological behaviors in these networks. And the difference between looking at the technological behaviors and the icing uh, machine is that now, First of all, we're looking at the type of Hamilton Ian's that are a little different than the icing Hamilton. And one of the biggest difference is is that most of these technological Hamilton Ian's that require breaking the time reversal symmetry, meaning that you go from one spin to in the one side to another side and you get one phase. And if you go back where you get a different phase, and the other thing is that we're not just interested in finding the ground state, we're actually now interesting and looking at all sorts of states and looking at the dynamics and the behaviors of all these states in the network. So we started with the simplest implementation, of course, which is a one d chain of thes resonate, er's, which corresponds to a so called ssh model. In the technological work, we get the similar energy to los mapping and now we can actually look at the band structure on. This is an actual measurement that we get with this associate model and you see how it reasonably how How? Well, it actually follows the prediction and the theory. 
One of the interesting things about the time multiplexing implementation is that now you have the flexibility of changing the network as you are running the machine. And that's something unique about this time multiplex implementation so that we can actually look at the dynamics. And one example that we have looked at is we can actually go through the transition off going from top A logical to the to the standard nontrivial. I'm sorry to the trivial behavior of the network. You can then look at the edge states and you can also see the trivial and states and the technological at states actually showing up in this network. We have just recently implement on a two D, uh, network with Harper Hofstadter model and when you don't have the results here. But we're one of the other important characteristic of time multiplexing is that you can go to higher and higher dimensions and keeping that flexibility and dynamics, and we can also think about adding non linearity both in a classical and quantum regimes, which is going to give us a lot of exotic, no classical and quantum, non innate behaviors in these networks. Yeah, So I told you about the linear side. Mostly let me just switch gears and talk about the nonlinear side of the network. And the biggest thing that I talked about so far in the icing machine is this face transition that threshold. So the low threshold we have squeezed state in these. Oh, pios, if you increase the pump, we go through this intensity driven phase transition and then we got the face stays above threshold. And this is basically the mechanism off the computation in these O pos, which is through this phase transition below to above threshold. So one of the characteristics of this phase transition is that below threshold, you expect to see quantum states above threshold. You expect to see more classical states or coherent states, and that's basically corresponding to the intensity off the driving pump. So it's really hard to imagine that it can go above threshold. Or you can have this friends transition happen in the all in the quantum regime. And there are also some challenges associated with the intensity homogeneity off the network, which, for example, is if one opioid starts oscillating and then its intensity goes really high. Then it's going to ruin this collective decision making off the network because of the intensity driven face transition nature. So So the question is, can we look at other phase transitions? Can we utilize them for both computing? And also can we bring them to the quantum regime on? I'm going to specifically talk about the face transition in the spectral domain, which is the transition from the so called degenerate regime, which is what I mostly talked about to the non degenerate regime, which happens by just tuning the phase of the cavity. And what is interesting is that this phase transition corresponds to a distinct phase noise behavior. So in the degenerate regime, which we call it the order state, you're gonna have the phase being locked to the phase of the pump. As I talked about non degenerate regime. However, the phase is the phase is mostly dominated by the quantum diffusion. Off the off the phase, which is limited by the so called shallow towns limit, and you can see that transition from the general to non degenerate, which also has distinct symmetry differences. And this transition corresponds to a symmetry breaking in the non degenerate case. The signal can acquire any of those phases on the circle, so it has a you one symmetry. 
Okay, and if you go to the degenerate case, then that symmetry is broken and you only have zero pie face days I will look at. So now the question is can utilize this phase transition, which is a face driven phase transition, and can we use it for similar computational scheme? So that's one of the questions that were also thinking about. And it's not just this face transition is not just important for computing. It's also interesting from the sensing potentials and this face transition, you can easily bring it below threshold and just operated in the quantum regime. Either Gaussian or non Gaussian. If you make a network of Opio is now, we can see all sorts off more complicated and more interesting phase transitions in the spectral domain. One of them is the first order phase transition, which you get by just coupling to Opio, and that's a very abrupt face transition and compared to the to the single Opio phase transition. And if you do the couplings right, you can actually get a lot of non her mission dynamics and exceptional points, which are actually very interesting to explore both in the classical and quantum regime. And I should also mention that you can think about the cup links to be also nonlinear couplings. And that's another behavior that you can see, especially in the nonlinear in the non degenerate regime. So with that, I basically told you about these Opio networks, how we can think about the linear scheme and the linear behaviors and how we can think about the rich, nonlinear dynamics and non linear behaviors both in the classical and quantum regime. I want to switch gear and tell you a little bit about the miniaturization of these Opio networks. And of course, the motivation is if you look at the electron ICS and what we had 60 or 70 years ago with vacuum tube and how we transition from relatively small scale computers in the order of thousands of nonlinear elements to billions of non elements where we are now with the optics is probably very similar to 70 years ago, which is a table talk implementation. And the question is, how can we utilize nano photonics? I'm gonna just briefly show you the two directions on that which we're working on. One is based on lithium Diabate, and the other is based on even a smaller resonate er's could you? So the work on Nana Photonic lithium naive. It was started in collaboration with Harvard Marko Loncar, and also might affair at Stanford. And, uh, we could show that you can do the periodic polling in the phenomenon of it and get all sorts of very highly nonlinear processes happening in this net. Photonic periodically polls if, um Diabate. And now we're working on building. Opio was based on that kind of photonic the film Diabate. And these air some some examples of the devices that we have been building in the past few months, which I'm not gonna tell you more about. But the O. P. O. S. And the Opio Networks are in the works. And that's not the only way of making large networks. Um, but also I want to point out that The reason that these Nana photonic goblins are actually exciting is not just because you can make a large networks and it can make him compact in a in a small footprint. They also provide some opportunities in terms of the operation regime. On one of them is about making cat states and Opio, which is, can we have the quantum superposition of the zero pie states that I talked about and the Net a photonic within? 
provides some opportunities to actually get closer to that regime, because of the spatio-temporal confinement you can get in these waveguides. We are doing some theory on that, and we are confident that the ratio of nonlinearity to loss that you can get with these platforms is much higher than what you can get with other, existing platforms. To go even smaller, we have been asking the question of what the smallest possible OPO is that you can make. You can think about truly wavelength-scale resonators, add the chi-2 nonlinearity, and see how and when you can get the OPO to operate. Recently, in collaboration with USC and CREOL, we demonstrated that you can use nanolasers to get spin-Hamiltonian implementations on those networks, so if we can build the OPOs, we know there is a path for implementing OPO networks at such a nanoscale. We have looked at the calculations and tried to estimate the threshold of an OPO for, say, a wavelength-scale resonator, and it turns out it can actually be even lower than the kind of bulk PPLN OPOs we have been building for the past 50 years or so. We are working on the experiments, and we are hoping we can make larger and larger-scale OPO networks. So let me summarize the talk. I told you about the OPO networks and our work on Ising machines and measurement feedback; I told you about the ongoing work on all-optical implementations, both on the linear side and on the nonlinear behaviors; and I also told you a little bit about the efforts on miniaturization and going to the nanoscale. With that, I would like to thank you.
>>I am joining from the University of Tokyo. Before I start, I would like to thank Yoshi and all the staff of NTT for the invitation and the organization of this online meeting, and I would also like to say that it has been very exciting to see the growth of this new PHI Lab. I am happy to share with you today some of the recent work that has been done either by me or by collaborators in the group. The title of my talk is "A neuromorphic in-silico simulator for the coherent Ising machine," and here is the outline. I would like to make the case that simulation of the CIM in digital electronics can be useful for better understanding or improving its functional principles by introducing some ideas from neural networks; that is what I will discuss in the first part. Then I will show some proof of concept of the gain in performance that can be obtained using this simulation in the second part, and projections of the performance that can be achieved using a very large-scale simulator in the third part, and finally I will talk about future plans. So first, let me start by comparing recently proposed Ising machines using this table, adapted from a recent Nature Electronics paper. The comparison shows that there is always a trade-off between energy efficiency, speed, and scalability that depends on the physical implementation. In red here are the limitations of each hardware platform. Interestingly, the FPGA-based systems, such as the Digital Annealer, Toshiba's simulated bifurcation machine, or a recently proposed restricted Boltzmann machine on FPGA by a group in Berkeley, offer a good compromise between speed and scalability.
This is why, despite the unique advantages that some of these other hardware platforms have, such as the coherent superposition in the optical CIMs or the energy efficiency of memristors, FPGAs are still an attractive platform for building large Ising machines in the near future. The reason for the good performance of FPGAs is not so much that they operate at high frequency, or that they are particularly power-efficient, but rather that the physical wiring of their elements can be reconfigured in a way that limits the von Neumann bottleneck, the large fan-in and fan-out, and the long propagation of information within the system. In this respect FPGAs are interesting from the perspective of the physics of complex systems, not only the physics of the interactions of photons. To put the performance of these various hardware platforms in perspective, we can look at the brain: the brain computes using billions of neurons, using only about 20 watts of power, and it operates at a comparatively very slow rate. These impressive characteristics motivate us to investigate what kinds of neuro-inspired principles could be useful for designing better Ising machines. The idea of this research project, and of the future collaboration, is to temporarily work around the limitations that are intrinsic to the realization of an optical coherent Ising machine, shown in the top panel here, by designing a large-scale simulator in silicon, in the bottom panel here, that can be used for testing better organizational principles for the CIM. In this talk I will discuss three neuro-inspired principles. The first is the asymmetry of connections, and the neural dynamics, often chaotic, that result from that asymmetry. The second is the structure of the connectivity: neural networks are not composed of the repetition of always the same type of neuron; there is a local structure that is repeated, and here is a schematic of the microcolumn in the cortex. And the third is the hierarchical organization of connectivity: connectivity is organized in a tree structure in the brain, and here you see a representation of the hierarchical organization of the monkey cerebral cortex. So how can these principles be used to improve the performance of Ising machines, and of their in-silico simulation? First, about the two principles of asymmetry and micro-structure. We know the classical approximation of the coherent Ising machine, which is akin to rate-based neural networks. In the case of the CIM, this classical approximation can be obtained using the truncated Wigner approximation, for example. The dynamics of both systems can then be described by the following ordinary differential equations, in which, in the case of the CIM, the x_i represent the in-phase component of one DOPO, the function f represents the nonlinear optical part, the degenerate optical parametric amplification, and the sum of the omega_ij x_j represents the coupling, which in the case of the measurement-feedback CIM is done using homodyne detection and an FPGA, and then injection of the computed coupling term. In both cases, the CIM and neural networks, this dynamics can be written as gradient descent on a potential function V, written here, and this potential function includes the Ising Hamiltonian.
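To help follow the spoken description of these equations, one common way of writing the classical, mean-field DOPO dynamics and the associated potential is sketched below; the signs and normalizations are illustrative and may differ from the speaker's slides:

```latex
\frac{dx_i}{dt} \;=\; (p-1)\,x_i \;-\; x_i^{3} \;+\; \sum_j \omega_{ij}\,x_j
               \;=\; -\,\frac{\partial V}{\partial x_i},
\qquad
V(\mathbf{x}) \;=\; \sum_i \left( \frac{x_i^{4}}{4} + \frac{(1-p)\,x_i^{2}}{2} \right)
              \;-\; \frac{1}{2}\sum_{i\neq j}\omega_{ij}\,x_i x_j ,
```

with p the pump parameter and omega_ij the symmetric couplings. When the couplings are symmetric this V acts as a Lyapunov function of the dynamics, and once the amplitudes saturate near plus or minus one the bilinear term is, up to constants, the Ising Hamiltonian, which is the point made next.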
This is why it is natural to use this type of dynamics to solve the Ising problem, in which the omega_ij are the Ising couplings and h is the external field of the Ising Hamiltonian we want to minimize. Note that this potential function can only be defined if the omega_ij are symmetric. The well-known problem with this approach is that the potential V we obtain is very non-convex at low temperature, and one strategy is to gradually deform this landscape using an annealing process; but unfortunately there is no theorem that guarantees convergence to the global minimum of the Ising Hamiltonian using this approach. This is why we propose to introduce a micro-structure into the system, where one analog spin, one DOPO, is replaced by a pair consisting of one analog spin and one error-correction variable. The addition of this micro-structure introduces asymmetry into the system, which in turn induces chaotic dynamics: a chaotic search, rather than an annealing process, for the ground state of the Ising Hamiltonian. Within this micro-structure, the role of the error variable is to control the amplitude of the analog spin, to force the amplitude of the spin to become equal to a certain target amplitude a. This is done by modulating the strength of the Ising couplings: the error variable e_i multiplies the Ising coupling term in the dynamics of each DOPO. The whole dynamics is then described by these coupled equations, and because the e_i do not necessarily take the same value for different i, this introduces asymmetry into the system, which in turn creates the chaotic dynamics shown here for a certain size of SK problem, in which the x_i are shown here, the e_i here, and the value of the Ising energy in the bottom plot. You can see this chaotic search visit various local minima of the Hamiltonian and eventually find the global minimum. It can be shown that this modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian, so that the system does not get stuck in any of them. Moreover, the other types of attractors that can eventually appear, such as limit cycles or chaotic attractors, can also be destabilized using the modulation of the target amplitude. We have proposed two different modulations of the target amplitude in the past. The first ensures that the rate of destabilization of the system stays positive, which forbids the creation of any nontrivial attractors; but in this work I will talk about another, restricted modulation, given here, that works as well as the first one but is easier to implement on an FPGA. These coupled equations describing the simulation of the coherent Ising machine with error correction can be implemented especially efficiently on an FPGA. Here I show the time it takes to simulate the system, and in red the time it takes to compute the x_i term, the e_i term, the dot product, and the Ising Hamiltonian, for a system with 500 spins and 500 error variables, equivalent to 500 DOPOs. On the FPGA, the nonlinear dynamics, which corresponds to the degenerate optical parametric amplification (the OPA) of the CIM, can be computed in only 13 clock cycles at 300 MHz, which corresponds to about 0.1 microseconds.
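As an illustration of the coupled spin and error-variable system just described, and whose FPGA cost is being discussed, here is a minimal numerical sketch in Python. The cubic saturation, the relaxation form of the error variable, the coupling scale, and all parameter values are assumptions chosen for readability; the published model and the FPGA implementation may differ in detail.

```python
import numpy as np

def simulate_cim_with_error_correction(J, p=1.2, beta=0.3, a=0.2, xi=0.05,
                                        dt=0.01, steps=20000, seed=0):
    """Euler integration of a CIM-like analog spin network with
    amplitude-error feedback (a sketch; exact forms and parameters may differ)."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = 0.01 * rng.standard_normal(n)       # in-phase amplitudes (analog spins)
    e = np.ones(n)                          # error / amplitude-control variables
    best_energy = np.inf
    best_spins = np.where(x >= 0, 1, -1)
    for _ in range(steps):
        dx = (p - 1.0) * x - x**3 + xi * e * (J @ x)   # spin dynamics, coupling scaled by e_i
        de = -beta * (x**2 - a) * e                    # push each amplitude toward the target a
        x = np.clip(x + dt * dx, -3.0, 3.0)            # numerical guards only
        e = np.clip(e + dt * de, 0.0, 100.0)
        spins = np.where(x >= 0, 1, -1)
        energy = -0.5 * spins @ J @ spins              # Ising energy (no external field)
        if energy < best_energy:
            best_energy, best_spins = energy, spins
    return best_spins, best_energy

# Example: a random symmetric +/-1 coupling matrix (an SK-like instance)
rng = np.random.default_rng(1)
n = 50
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1)
J = J + J.T
spins, energy = simulate_cim_with_error_correction(J)
print(energy)
```

The point of the error variables e_i is visible in the two update lines: they multiply the coupling term and relax according to how far each amplitude is from the target, which is what breaks the symmetry and produces the chaotic search described above.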
This is to be compared to what can be achieved in the measurement-feedback CIM, in which, if we want to have 500 time-multiplexed DOPOs with a 1 GHz repetition rate pulsed laser, we would need about 0.5 microseconds to do the same thing, so the simulation on the FPGA can be at least as fast as a 1 GHz repetition rate pulsed-laser CIM. Then the dot product that appears in this differential equation can be computed in 43 clock cycles, that is to say, about 0.14 microseconds. So at least for problem sizes larger than 500 spins, the dot product clearly becomes the bottleneck, and this can be seen by looking at the scaling of the number of clock cycles it takes to compute either the nonlinear optical part or the dot product with respect to the problem size. If we had an infinite amount of resources on the FPGA to simulate the dynamics, then the nonlinear optical part could be computed in O(1) and the matrix-vector product could be computed in O(log N), because computing the dot product involves summing all the terms in the product, which is done in the FPGA by an adder tree whose height scales logarithmically with the size of the system. But that is only the case if we had an infinite amount of resources on the FPGA; for larger problems, of more than about 100 spins, we usually need to decompose the matrix into smaller blocks, with a block size that is noted U here, and then the scaling becomes linear in N/U for the nonlinear part and (N/U) squared for the product. Typically, for a low-end FPGA, the block size of this matrix is about 100. So clearly we want to make U as large as possible in order to maintain the O(log N) scaling of the number of clock cycles needed to compute the product, rather than the quadratic scaling that occurs if we decompose the matrix into smaller blocks. But the difficulty with having larger blocks is that a very large adder tree introduces large fan-in and fan-out and long-distance data paths within the FPGA. So the solution for getting higher performance for a simulator of the coherent Ising machine is to get rid of this bottleneck on the dot product by increasing the size of the adder tree, and this can be done by organizing the electrical components within the FPGA hierarchically, in the way shown here in the right panel, in order to minimize the fan-in and fan-out of the system and to minimize the long-distance data paths in the FPGA. I am not going to go into the details of how this is implemented on the FPGA; this is just to give you an idea of why the hierarchical organization of the system becomes extremely important for getting good performance when simulating Ising machines. Instead of getting into the details of the FPGA implementation, I would like to give some benchmark results for this simulator, which was used as a proof of concept for this idea and which can be found in this arXiv paper. Here I show results for solving SK problems.
These are fully connected, randomly chosen, plus-or-minus-one spin-glass problems (SK problems), and we use as a metric the number of matrix-vector products, since that is the bottleneck of the computation, needed to reach the optimal solution of these SK problems with 99% success probability, plotted against the problem size. In red is the proposed FPGA implementation; in blue is the number of matrix-vector products necessary for the CIM without error correction to solve these SK problems; and in green is noisy mean-field annealing, whose behavior is similar to the coherent Ising machine. You can clearly see that the number of matrix-vector products necessary to solve these problems scales with a better exponent for this approach than for the other approaches, which is an interesting feature of the system. Next we can look at the real time-to-solution for these SK instances. On the vertical axis is the time-to-solution in seconds to find a ground state of SK instances with 99% probability, for different state-of-the-art hardware. In red is the FPGA implementation proposed in this paper, and the other curves represent breakout local search, in orange, and simulated annealing, in purple, for example. You can see that the scaling of this proposed simulator is rather good, and that for larger problem sizes we can be orders of magnitude faster than the state-of-the-art approaches. Moreover, the relatively good scaling of the time-to-solution with respect to problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield neural network implemented on memristor crossbars, shown in blue here, which is very fast for small problem sizes but whose scaling is not good, and the same for the restricted Boltzmann machine implemented on an FPGA recently proposed by the group in Berkeley, which again is very fast for small problem sizes but scales badly, so that it becomes worse than the proposed approach. We can therefore expect that for problem sizes larger than about 1000 spins, the proposed approach would be the faster one. Let me jump to this other slide. Another confirmation that the scheme scales well is that we can find maximum-cut values on the G-set benchmark that are better than the cut values previously found by any other algorithm, the best-known cut values to the best of our knowledge, as shown in the table in this paper. In particular, for instances 14 and 15 of the G-set we can find better cuts than previously known, and we can find these cut values about 100 times faster than the state-of-the-art algorithm used to obtain them. Note that getting these good results on the G-set did not require any particularly hard tuning of the parameters; the tuning used here is very simple and just depends on the degree of connectivity within each graph. These good results on the G-set indicate that the proposed approach would be good not only at solving SK problems but at solving all types of graph Ising problems, such as max-cut problems.
Given that the performance of the design depends on the height of this adder tree, we can try to maximize the height of the adder tree on a large FPGA by carefully routing the components within the FPGA, and we can draw some projections of the type of performance we can achieve in the near future based on the implementation we are currently working on. Here you see projections for the time-to-solution with 99% success probability for solving SK problems, with respect to the problem size, compared to various state-of-the-art Ising machines, in particular the Digital Annealer, shown by the green line here. We show two different hypotheses for these projections: either the time-to-solution scales as the exponential of N, or it scales as the exponential of the square root of N. According to the data, it seems that the time-to-solution scales more like the exponential of the square root of N, and these projections show that we could probably solve SK problems of size 2000 spins, finding the true ground state with 99% success probability, in about 10 seconds, which is much faster than all the other proposed approaches. As for future plans for this coherent Ising machine simulator: the first is that we would like to make the simulation closer to the real DOPO optical system, in particular, as a first step, closer to the measurement-feedback CIM. To do this, what can be simulated on the FPGA is the quantum truncated Gaussian model that is proposed and described in this paper by people in the NTT group. The idea of this model is that, instead of the very simple ODEs I showed previously, it includes equations that take into account not only the mean of the in-phase and quadrature components but also their variances, so that we can take into account more of the quantum effects of the DOPO, such as squeezing. We then plan to make the simulator open-access for the members, so that they can run their own instances on the system. There will be a first version in September that will be based on a simple command-line interface to the simulator and will include just the classical approximation of the system, with a noise term, binary weights, and no Zeeman term. We will then propose a second version that will extend the current Ising machine to a hierarchy of FPGAs, in which we will add the more refined models, the truncated Wigner and the truncated Gaussian model I just talked about, and support real-valued weights for the Ising problems as well as the Zeeman term. We will announce it when this becomes available.
>>I come from the University of Notre Dame, in the physics department, and I would like to thank the organizers for their kind invitation to participate in this very interesting and promising workshop. I would also like to say that I look forward to collaborations with the PHI Lab and with Yoshi and collaborators on the topics of this workshop. Today I will briefly talk about our attempt to understand the fundamental limits of analog continuous-time computing, at least from the point of view of Boolean satisfiability problem solving using ordinary differential equations. But I think the issues we raise on this occasion actually apply to other analog approaches as well, and to other problems as well.
I think everyone here knows what Boolean satisfiability problems are: you have Boolean variables and M clauses, each a disjunction of literals, where a literal is a variable or its negation, and the goal is to find an assignment of the variables such that all clauses are true. This is a decision-type problem in the class NP, which means you can check in polynomial time whether any given assignment satisfies it. And 3-SAT is NP-complete, as is k-SAT for k of three or larger, which means an efficient 3-SAT solver implies an efficient solver for all problems in NP, because all problems in NP can be reduced to 3-SAT in polynomial time. As a matter of fact, you can reduce the NP-complete problems into one another: you can go from 3-SAT to set packing, or to maximum independent set, which is set packing in graph-theoretic terms, or to the decision version of the Ising spin-glass problem. This is useful when you are comparing different approaches that work on different kinds of problems. When not all the clauses can be satisfied, you look at the optimization version of SAT, called MAX-SAT, where the goal is to find the assignment that satisfies the maximum number of clauses; that one is in the NP-hard class. In terms of applications: if we had an efficient SAT solver, or an efficient solver for NP-complete problems, it would literally, positively influence thousands of problems and applications in industry and in science. I am not going to read this list, but it of course gives strong motivation to work on this kind of problem. Now, our approach to SAT solving involves embedding the problem in a continuous space, and we use ODEs to do that. Instead of working with zeros and ones, we work with minus one and plus one, and we allow the corresponding variables to change continuously between the two bounds. We formulate the problem with the help of a clause matrix: if a clause does not contain a variable or its negation, the corresponding matrix element is zero; if it contains the variable in positive form, it is plus one; if it contains its negation, it is minus one. We then use this to formulate products called clause violation functions, one for every clause, which vary continuously between zero and one and are zero if and only if the clause is satisfied. Then, in order to define the dynamics in this N-dimensional hypercube where the search happens, and where the solutions, if they exist, sit at some of the corners of the hypercube, we define an energy, a potential or landscape function, shown here, in such a way that it is zero if and only if all the clause violation functions K_m are zero, that is, all clauses are satisfied, while keeping the auxiliary variables a_m always positive. What you then do is a dynamics that is essentially a gradient descent on this potential energy landscape. If you were to keep all the a_m constant, it would get stuck in some local minimum. However, what we do is couple it with a dynamics for the auxiliary variables that uses the clause violation functions, as shown here. If you did not have this a_m here, just K for example, you would essentially have positive feedback and an increasing variable, but in that case you would still get stuck.
That version is better than keeping the a_m constant, but it still gets stuck; only when you put in this a_m, which makes the dynamics of that variable exponential-like, does it keep searching until it finds a solution. There is a reason for that which I am not going to discuss here, but essentially it boils down to performing a gradient descent on a globally time-varying landscape, and that is what works. Now let me talk about the good, the bad, and maybe the ugly. What is good is that this is a hyperbolic dynamical system, which means that if you take any domain of the search space that does not contain a solution, the number of trajectories in it decays exponentially quickly, and the decay rate is an invariant characteristic of the dynamics itself; dynamical systems people call it the escape rate. The inverse of that is the time scale on which you find solutions with this dynamical system. You can see here some sample trajectories that are chaotic, because the system is nonlinear, but it is transient chaos, because eventually they converge to the solution. Now, in terms of performance: what we show here, for a bunch of constraint densities, defined by M over N, the ratio between clauses and variables, for random 3-SAT problems, as a function of N, is the wall-clock time that we monitor, and it behaves polynomially only until you reach the SAT/UNSAT transition, where the hardest problems are found. What is more interesting is that if you monitor the continuous time t, the performance in terms of the analog continuous time t, that seems to be polynomial. The way we show that is to consider random 3-SAT at a fixed constraint density, to the right of the threshold where it is really hard, and to monitor the fraction of problems that have not yet been solved. We select thousands of problems at that constraint ratio, solve them with our algorithm, and monitor the fraction of problems not yet solved by continuous time t. As you see, this decays exponentially, with different decay rates for different system sizes, and this plot shows that the decay rate behaves polynomially, actually as a power law. If you combine these two observations, you find that the time needed to solve all problems, except perhaps a vanishing fraction of them, scales polynomially with the problem size. So you have polynomial continuous-time complexity. This is also true for other types of very hard constraint satisfaction problems, such as exact cover, because you can always transform them into 3-SAT as we discussed before, and Ramsey coloring, and on these problems even algorithms like survey propagation will fail. But this does not mean that P equals NP, because, first of all, if you were to implement these equations in a device whose behavior is described by these ODEs, then of course t, the continuous time variable, becomes physical wall-clock time, and that would give polynomial scaling; but you have these other variables, the auxiliary variables, which grow in an exponential manner. So if they represent currents or voltages in your realization, there would be an exponential cost.
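Here is a rough, self-contained sketch of the continuous-time SAT dynamics described above: continuous variables in [-1, 1], clause violation functions K_m built from a clause matrix, gradient descent on V = sum_m a_m K_m^2, and auxiliary variables a_m that grow on violated clauses. Step size, initialization, and the termination test are illustrative assumptions rather than the exact choices in the speaker's papers.

```python
import numpy as np

def ctds_sat_solver(clauses, n_vars, dt=0.05, max_steps=100000, seed=0):
    """Continuous-time dynamical-systems SAT solver (sketch).
    clauses: list of clauses in DIMACS style, e.g. [1, -3, 4] means x1 OR (NOT x3) OR x4."""
    rng = np.random.default_rng(seed)
    m = len(clauses)
    C = np.zeros((m, n_vars))                    # clause matrix: +1, -1, or 0
    for row, clause in enumerate(clauses):
        for lit in clause:
            C[row, abs(lit) - 1] = 1.0 if lit > 0 else -1.0
    k = np.count_nonzero(C, axis=1)              # number of literals in each clause
    s = rng.uniform(-0.5, 0.5, n_vars)           # continuous "spins" inside (-1, 1)
    a = np.ones(m)                               # auxiliary weights, kept positive

    for _ in range(max_steps):
        assignment = np.where(s >= 0.0, 1.0, -1.0)
        if (((C * assignment) > 0.0).any(axis=1)).all():
            return assignment                    # every clause has a true literal
        terms = 1.0 - C * s                      # equals 1 wherever c_mi == 0
        K = 2.0 ** (-k) * terms.prod(axis=1)     # clause violation, 0 iff clause satisfied
        K_excl = np.where(C != 0.0, K[:, None] / np.clip(terms, 1e-12, None), 0.0)
        ds = 2.0 * ((a * K)[:, None] * C * K_excl).sum(axis=0)   # -dV/ds_i
        da = a * K                               # exponential growth on violated clauses
        s = np.clip(s + dt * ds, -1.0, 1.0)
        a = a + dt * da
    return None                                  # no solution found within the step budget

# Tiny example: (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
print(ctds_sat_solver([[1, 2], [-1, 3], [-2, -3]], n_vars=3))
```

Note how the a_m appear in both update lines: they reweight the gradient on the spins and they themselves grow whenever their clause is violated, which is exactly the exponential resource being discussed.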
So what you have is some kind of trade-off between time and energy: I do not know how to generate time, but I do know how to generate energy, so one could use that. But there are other issues as well, especially if you are trying to do this on a digital machine, and other problems appear in physical devices too, as we will discuss later. If you implement this on a GPU, you can get an order or two of magnitude of speedup, and you can also modify this approach to solve MAX-SAT problems quite efficiently; we are competitive with the best heuristic solvers on the benchmark problems from the 2016 MAX-SAT competition. So this definitely seems like a good approach, but there are of course interesting limitations. I would say interesting because they make you think about what it all means and how you can exploit these observations to better understand analog continuous-time complexity. If you monitor the number of discrete steps taken by the Runge-Kutta integrator when you solve this on a digital machine, using some kind of integrator and the same approach, but now measuring the number of problems you have not yet solved after a given number of discrete steps taken by the integrator, you find that you have exponential discrete-time complexity, and of course that is a problem. If you look closely at what happens, even though the integrator follows the analog mathematical trajectory, the one recorded here, very closely, to third or fourth digit precision, the step size fluctuates like crazy, so it is really as if the integration freezes out. And this is because of the phenomenon of stiffness, which I will talk a little bit more about later. It might look
like an integration issue on digital machines that you could improve, and you definitely could improve it, but the issue is actually bigger and deeper than that, because on a digital machine there is no time-energy conversion: the auxiliary variables are efficiently represented on a digital machine, so there is no exponentially fluctuating current or voltage in your computer when you do this. So if P is not equal to NP, then the exponential time complexity, or exponential cost complexity, has to hit you somewhere. One would be tempted to think that maybe this would not be an issue in an analog device, and to some extent that is true; analog devices can be orders of magnitude faster, but they suffer from their own problems, because they are not going to be perfect solvers either. Indeed, if you look at other systems, such as the measurement-feedback Ising machines or the other analog networks discussed in the previous talks, they all hinge on some ability to control your variables to arbitrarily high precision. In certain networks you need very precise readout, and in the case of CIMs you require identical pulses, which are hard to keep identical; they fluctuate and shift away from one another, and if you could control that, of course, you could control the performance. So one can ask whether or not this is a universal bottleneck, and it seems so, as I will argue next. We can recall a fundamental result by Schönhage from 1978, a purely computer-science proof, which says that if you are able to compute the addition, multiplication, and division of real variables with infinite precision, then you can solve NP-complete problems in polynomial time. It does not actually propose a solver; it just shows mathematically that this would be the case. Now, of course, in the real world you have finite precision, so the next question is how that affects the computation of these problems, and that is what we are after. Loss of precision means information loss, or entropy production, so what you are really looking at is the relationship between the hardness of a problem and the cost of computing it. Following that result, there is this left branch, which in principle could be polynomial time, but the question is whether or not that is achievable; what is achievable is something more realistic, on the right-hand side: there is always going to be some information loss, some entropy generation, that can keep you away from polynomial time. This is what we would like to understand, and this information loss, I will argue, is not only present in any physical system; it is also of an algorithmic nature, so it is a question for any analog approach. But that result is purely theoretical, and no actual solver is proposed, so we can ask, just theoretically, out of curiosity, whether a solver would in principle have the right properties if you look mathematically, precisely, at what the solver does. And I argue yes. I do not have a mathematical proof, but I have some arguments that this would be the case, and that it is the case for our SAT solver: if you could calculate its trajectory losslessly, then it would solve NP-complete problems in polynomial continuous time. Now, as a matter of fact, this is a slightly more subtle question, because time in ODEs can be rescaled however you want, so what you really have to measure is the length of the trajectory, which is an invariant of the dynamical system, a property of the dynamical system and not of its parametrization. And we did that: my student did that first, improving on the stiffness of the integration problem using implicit solvers and some smart tricks, so that you are actually closer to the true trajectory, and using the same approach, measuring what fraction of problems you can solve, but now against the length of the trajectory, you find that it scales polynomially with the problem size. We have polynomial-length complexity. That means our solver is both polynomial-length and, as it is defined, also polynomial-time as an analog solver. But if you look at it as a discrete algorithm, if you measure the discrete steps on a digital machine, it is an exponential solver, and the reason is this stiffness. Every integrator has to truncate; digitizing truncates the equations, and what it has to do is keep the integration within the so-called stability region for that scheme: you have to keep the product of the Jacobian's eigenvalues and the step size within this region. If you use explicit methods, you want to stay within this region.
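To make the stability-region constraint concrete, consider the simplest explicit scheme, forward Euler, applied to a linearized mode with Jacobian eigenvalue lambda; this is a textbook illustration rather than anything specific to the solver being discussed:

```latex
y_{n+1} = y_n + \Delta t\, f(y_n), \qquad
f(y) \approx \lambda y \;\Rightarrow\; y_{n+1} = (1 + \Delta t\,\lambda)\, y_n ,
\qquad \text{stable only if } |1 + \Delta t\,\lambda| \le 1 .
```

For a real, negative eigenvalue this caps the step at Delta t <= 2/|lambda|, so when the Jacobian's eigenvalues span many orders of magnitude, which is what stiffness means, the fastest mode forces a tiny step even though the slow modes carry the dynamics of interest.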
But what happens is that some of the eigenvalues grow fast for stiff problems, and then you are forced to reduce delta t so that the product stays in this bounded domain, which means you are forced to take smaller and smaller time steps, so you are freezing out the integration, and I will show you that this is the case. Now, you can move to implicit solvers, which is a trick; in this case the stable domain is actually on the outside. But what happens then is that some of the eigenvalues of the Jacobian, again for stiff systems, start to move toward zero, and as they move toward zero they enter the instability region, so your solver tries to keep them out by increasing delta t; but if you increase delta t you increase the truncation errors, so you get randomized in the large search space, and it is really not going to work out either. Now, one can introduce a theory, or a language, to discuss this kind of analog computational complexity using the language of dynamical systems theory. I do not have time to go into this, but basically, for hard problems you have this object, a chaotic saddle, somewhere in the middle of the search space, and that dictates how the dynamics happens; the invariant properties of the dynamics on that saddle are what dictate the performance, and many other things. An important measure that we find helpful in describing this analog complexity is the so-called Kolmogorov, or metric, entropy. Intuitively, what it describes is the rate at which the uncertainty contained in the insignificant digits of a trajectory flows toward the significant ones, as you lose information because errors grow at an exponential rate, since you have positive Lyapunov exponents. But this is an invariant property: it is a property of the dynamics itself, not of how you compute it, and it is really the interesting rate of accuracy loss of a dynamical system. As I said, in such a high-dimensional system there are positive and negative Lyapunov exponents, as many in total as the dimension of the space; the number of unstable manifold directions gives the positive ones, and the stable manifold directions give the negative ones. And there is an interesting and, I think, important relation, a Pesin-type equality, that connects the information-theoretic aspect, the rate of information loss, with the geometric rate at which trajectories separate, minus kappa, the escape rate that I already talked about. Now, one can actually prove simple theorems, back-of-the-envelope calculations. The idea here is that you know the rate at which closely started trajectories separate from one another, so you can say that things are fine as long as my trajectory finds the solution before nearby trajectories separate too quickly. In that case I can have the hope that if I start several closely spaced trajectories from some region of the phase space, they all end up in the same solution, and that gives this upper bound, this limit, and it really shows that it has to be an exponentially small number. What it depends on is the N-dependence of the exponent here, which combines the information-loss rate and the time-to-solution performance.
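For reference, the Pesin-type equality mentioned above is usually quoted for chaotic saddles in roughly this form (stated here from general dynamical-systems results, not from the speaker's slides):

```latex
h_{\mathrm{KS}} \;=\; \sum_{\lambda_i > 0} \lambda_i \;-\; \kappa ,
```

where h_KS is the Kolmogorov-Sinai (metric) entropy, the lambda_i are the positive Lyapunov exponents on the saddle, and kappa is the escape rate; for a closed system with kappa equal to zero it reduces to the usual Pesin equality.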
So if the exponent in that bound has a strong N-dependence, or even a linear N-dependence, then you really have to start trajectories exponentially close to one another in order to end up in the same solution. That is the direction this is going in, and this formulation is applicable to all deterministic dynamical systems. And I think we can expand it further, because there is a way of getting an expression for the escape rate in terms of N, the number of variables, from cycle expansions, which I do not have time to talk about; it is the kind of program one can try to pursue. And that is it. The conclusions, I think, are self-explanatory. I think there is a lot of future in analog continuous-time computing. These systems can be more efficient than digital ones by orders of magnitude in solving NP-hard problems because, first of all, many of them avoid the von Neumann bottleneck, there is parallelism involved, and you also have a much larger spectrum of continuous-time dynamical algorithms than discrete ones. But we also have to be mindful of the possibilities and the limits, and one very important open question is: what are these limits? Is there some kind of no-go theorem that tells you that you can never perform better than this limit or that limit? I think that is the exciting part, to derive these limits.
Sheng Liang, Rancher Labs & Murli Thirumale, Portworx | KubeCon + CloudNativeCon Europe - Virtual
>>From around the globe, it's theCUBE, with coverage of KubeCon + CloudNativeCon Europe 2020 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners.
>>Welcome back. This is theCUBE's coverage of KubeCon + CloudNativeCon, the European show for 2020. I'm your host, Stu Miniman. When we talk about the container world, we talk about what's happening in cloud native, and storage has been one of those sticking points, one of those things that has been challenging and that we've been watching mature. I'm really happy to welcome back to the program two of our Cube alumni to give us an update on the state of storage for the container world. Both of them are co-founders and CEOs. First of all, we have Sheng Liang from Rancher Labs, which SUSE recently announced its intention to acquire, and also joining us is Murli Thirumale, who is with Portworx. Thank you both so much for joining us.
>>Thank you.
>>Thank you.
>>Alright. So Murli, I'm actually going to start with you, because we've seen a couple of waves of companies working on storage in this environment. We know storage is difficult, and when we change how we're building things, there are architectural implications. So maybe give us a snapshot: Portworx was created to help unpack this, so here in 2020, where do you see things in the overall container storage landscape?
>>Absolutely, Stu. Before I jump into Portworx, I just want to take a minute to publicly congratulate the whole Rancher team, and Sheng and Shannon and Will, who I have known for a while. They're true entrepreneurs; they represent the serial entrepreneur spirit that so many folks know in the Valley, and it's a great outcome for them. We're very happy for them, so big congrats and a shout-out to the whole team. Portworx is a little over five years old, and right from the inception of the company we recognized that to put containers in production, you're going to have to solve not just the orchestration problem but the issue of storage and data orchestration. In a nutshell, Kubernetes orchestrates containers, and Portworx orchestrates storage and data. More specifically, by doing that, what we enable is enterprises taking containerized apps into production at scale, with high availability, disaster recovery, backup, all of the things that for decades IT has had to do, and has done, to support application reliability and availability, but we're doing it with a purpose-built solution for containerized workloads.
>>Alright, Sheng, of course storage is a piece of the overall puzzle that Rancher's trying to help with. Maybe you could refresh our audience on Longhorn, which your organization built as open source; it's now being managed by the CNCF, as I understand it. So help us bring Longhorn into the discussion.
>>Thanks, Stu. I'm really glad to be here. I think Rancher and Portworx started at about the same time, and we started with a slightly different focus. Murli is exactly right: to get containers going, you really need both the compute angle, orchestrating containers, and the orchestration of storage and data. Rancher started with a slightly stronger focus on orchestrating containers themselves, but pretty quickly we realized that as adoption of containers grows, we really need to be able to handle storage as well. Like any new technology, Kubernetes and containers created some interesting new requirements and opportunities, and at the time there weren't a lot of good technologies available; technologies like Rook and Ceph were very premature then. Early on we actually tried to incorporate existing clustered storage technology, and it was just not easy. At the time, Portworx was busy developing what turned out to be their flagship product, which we ended up partnering with very closely, but early on we really had no choice but to start developing our own storage technology. So Longhorn, as a piece of container storage technology, is actually almost as old as Rancher itself. One of our founding engineers ended up working on it, and over the years the focus shifted: the original version was written in C++, and it has since been completely rewritten in Go. It was originally written more for Docker workloads; now, of course, everything is Kubernetes-centric. Last year we decided to donate the Longhorn open-source project to the CNCF, and it's now a CNCF Sandbox project, and adoption is growing really quickly. Earlier this year we finally decided we were ready to offer commercial support for it. So that's where Rancher is with Longhorn and container storage technology.
>>Yeah, it has been really interesting to watch this ecosystem. A couple of years ago, at one of the KubeCon shows, I was talking to people coming out of, I believe it was the SIG, the special interest group for storage, and it was heated; words were going back and forth, and there was not a lot of agreement. Anybody that knows the storage industry knows that standards and the various ways of doing things are often contentious, and there are differences of opinion; there's a reason there are so many different solutions out there. So maybe, Murli, from your standpoint: things are coming together a little bit more, but there are still a number of options out there. Why is this kind of coopetition actually good for the industry?
>>Yeah, I think this is a classic example of coopetition. Let's start with the cooperation part. The early days of the CNCF, and even the Kubernetes community, were really focused on compute, and in subsequent years, the last three or four, there's been greater attention to making the whole stack work, because that's what it takes to take an enterprise-class application into production. So extensions like CNI for networking and CSI, the Container Storage Interface, were put together by working groups both in the CNCF and within the Kubernetes community. You mentioned the storage SIG as an example, and as always happens, it looked a little bit in the early days like a water polo game, where folks are seemingly working with each other on top of the pool, but underneath they're kicking each other furiously. But that was a long time back, and we've graduated from that into really cooperating, which is something we should all be proud of, where now the CSI interface is a very strong and complete solution for allowing Kubernetes to orchestrate storage and data. It has strengthened both communities and the Kubernetes ecosystem. Now, the competition part; let me spend a couple of minutes on that too. One of the classic things people sometimes confuse is the difference between an overlay and an interface. CSI is wonderful because it defines how the two layers, essentially old-style storage, whether it's a SAN or a cloud elastic storage bucket, interact with Kubernetes. The definition of that interface lays down the rules and parameters for how that interaction should happen. However, you still always need an overlay like Portworx that actually drives that interface and enables Kubernetes to manage that storage, and that's where the competition is. Sheng mentioned Ceph and Rook and derivatives of those, and those are venerable and really excellent products, but they were born in a different era, for a different time, OpenStack and object storage and all of that, not really meant for this kind of primary workload, and they've been adapted for it. Portworx was built, right from inception, to be designed for Kubernetes and for Kubernetes workloads at enterprise scale. So as I look at the landscape, we welcome the fact that so many more people acknowledge there is a vital need for data orchestration on Kubernetes; that's why everybody and their brother now has a CSI interface. However, there's a big difference between having an interface and actually having the software that provides the functionality for HA, DR, and backup as the lifecycle matures, and doing it not just at scale, but in a way that allows real removal or reduction of the storage admin role and replaces it with self-service that is fully automated within Kubernetes.
>>Yeah, if I can add something, I completely agree. Longhorn has been around for a long time, and I'm really happy that over the years it hasn't impacted our wonderful collaborative partnership with Portworx. Portworx has always been one of our premier partners; we have a lot of common customers, and I know those customers rave about Portworx; I don't think they'll ever move off it. Exactly like Murli said, in the storage space there's an interface, and a lot of different implementations can plug in, and that's kind of how Rancher works. We always tell people Rancher works with three types of storage implementations. One is what we call legacy storage: your NetApp, your EMC, your Pure Storage. Those are really solid; they just weren't designed to work with containers to start with, but it doesn't matter, because they've all written CSI interfaces that enable containers to take advantage of them. The second type is the cloud block storage or file storage services, like EBS, EFS, and Google Cloud storage, and support for these backends, the CSI drivers, practically comes with Kubernetes itself, so those are very well supported. The third type is what we call container-native storage, and that is where Portworx and Longhorn and other solutions like OpenEBS and StorageOS fit in; it's a very vibrant ecosystem of innovation. Those solutions are able to create reliable storage from scratch, from just local disks, and they're also able to add a lot of value on top of whatever traditional or cloud-based persistent storage you already have. So the whole ecosystem is developing very quickly, a lot of these solutions work with each other, and to me it's really less of a competition, or even coopetition; it's really more about raising the bar on capabilities so we can accelerate the amount of workload being moved onto this wonderful Kubernetes platform, which in the end benefits everyone.
>>Well, I appreciate you both laying out some of the options. Sheng, just a quick follow-up on that: I think back 15 years ago, it was often, okay, I'm using my EMC for my block and my NetApp for my file. I'm wondering, in the cloud native space, whether we expect that you might have multiple different data engine types in there; you mentioned you might want Portworx for high performance, and OpenEBS, very popular in the last CNCF survey, might be another one. So do we think some of this is just history repeating itself, that storage is not monolithic, and in a microservices architecture different environments need different storage requirements?
>>Yeah, and I'd love to hear Murli's view as well, especially about how the ecosystem is developing, but from my perspective, the range of capabilities that we now expect from storage vendors, or data management vendors, has increased tremendously. In the old days, if you could store blocks, objects, and files, that was it. Now that's just table stakes. Then what comes after that? There will be three, four, five additional layers of requirements, all the way from backup and restore to DR, search, indexing, analytics. I really think all of this potentially falls into the bucket of the storage ecosystem, and I just can't wait to see how this plays out; I think we're still at a very early stage. What containers did is make the workload fundamentally portable, but the data itself still holds a lot of gravity, and there's so much work to do to leverage that fundamental workload portability and marry it with some form of universal data management or data portability. I think that would really lift the industry to the next level. Murli?
>>Yeah, Sheng, I couldn't have said it better. Let me give you a sample. We're at about 160-plus customers now, adding several a month. Just with Rancher alone, we have common customers, including NVIDIA, Expedient, Roche, Western Asset Management, and Charter Communications, so we're in production with a number of Rancher customers. What do these customers want, and why are they looking at a Portworx class of solution? To use Sheng's example of the multiple types: many times people can get started in the early days with something that has a CSI interface, maybe with 8 to 10 nodes, with a solution that lets them verify they can run the stack up and down with, say, a Rancher-type orchestrator, containerized workloads, a network plugin, and a storage plugin. But once they get beyond 20 nodes or so, there are problems that are very unique to containers and Kubernetes that pop up, problems you don't see in a non-containerized environment. What are some of these things? Simple examples: how can you run tens to hundreds of containers on a server, with each of those containers belonging to a different application and having different requirements? How do you scale not to 16 nodes, which is maybe the typical maximum a SAN might go to, but to hundreds and thousands of nodes, like many of our customers, T-Mobile and Comcast for example, are doing? So scale is one issue. And here is a critical difference in what something designed for Kubernetes does: we provide all of the storage functions Sheng described at container granularity rather than machine granularity. One way to think about this is that the old data center was a machine-based construct, and VMware is sort of the leader in that world: you think of storage as LUNs, you think of compute as CPUs and VMs, you have subnets; all of traditional infrastructure is very machine-centric. What Kubernetes and containers do is move that to an app-defined control plane. One of the things we're super excited about is the fact that Kubernetes is really not just a container orchestrator but an orchestrator for infrastructure in an app-defined way, and by doing that, control of the infrastructure, via Kubernetes, has been handed to the Kubernetes admin. The same person who uses Rancher uses Portworx, at NVIDIA for example, to manage storage just as they use it to manage compute and containers. And that's marvelous, because now the whole thing is fully automated at scale and can run without the intervention of a storage admin. No more trouble tickets, no more requests saying, hey, give me another 20 terabytes; all of that happens automatically with a solution like Portworx. In fact, in the world of real-time services we're all headed toward, services like Uber, and in enterprises machine learning, AI, analytics, all of the things Sheng talked about, you expect these to run in a fully automated way across vast amounts of data that are distributed, sometimes at the edge, and you can't do that unless you're fully automated and no longer rely on storage admin intervention. That's the kind of solution we provide.
>>Alright, well, we're just about out of time. For the last piece, Sheng, talk about where we are with Longhorn and what we should expect to see through the rest of this year, and Murli, for you, what differentiates Portworx from the open-source options. Maybe we start with Longhorn in general, and then, Murli, from your standpoint.
>>Yeah, so the goal of Longhorn is really to lower the bar for folks to run stateful workloads on Kubernetes. Longhorn is 100% open source and it's owned by the CNCF now. In terms of features and functionality, it's obviously a small subset of what a true enterprise-grade solution like Portworx can provide. The storage roadmap is very rich. It's not really Rancher's goal, or Longhorn's goal, to try to turn itself into a plug-in replacement for these enterprise-grade storage or data management solutions, but there are some critical feature gaps that we need to address, and that's what the team is going to be focusing on for the rest of the year.
>>Yeah, I would echo what Sheng said. Folks may get started with solutions like Longhorn, or with a connector plugin to one of their existing storage vendors, whether it's Pure, NetApp, or EMC. From our viewpoint, that's wonderful, because it allows them to graduate to where they're considering storage and data as part of the stack, and they really should; that's the way they're going to succeed, by looking at it as a whole. It's a great way to get started on a proof-of-concept architecture, where your focus initially is very much on the orchestration and the containerization part. But as Sheng pointed out, what Rancher did for Kubernetes was build a simple, elegant, robust solution that democratized Kubernetes. We're doing the same thing for Kubernetes storage. What Portworx offers is a solution that is simple, elegant, fully automated, scalable, and robust, but more importantly, it's a complete data platform. We go where all these solutions start but don't venture forward: complete lifecycle management for data across the whole lifecycle. Many customers now buy Portworx and add DR right up front, and a few months later they might come back and add backup from Portworx. And to Sheng's point, because of the uniqueness of the Kubernetes workload, because it is an app-defined control plane rather than machine-defined, it is disruptive, just like virtualization was in its day. Veeam exists today because they focused on a VM version of the backup solution. The same thing is happening now: Kubernetes workloads are causing disruption of the DR, backup, and storage markets, with solutions like Portworx.
>>Wonderful. Murli, Sheng, thank you so much for the updates. The promise of containers, as you were saying, really is that atomic unit getting closer to the application, and it requires storage to be a full and useful part of the solution. Great to see the progress being made. Thank you so much for joining us.
>>You're welcome. Sheng, we look forward to working with you as you reach for the stars. Congratulations again.
>>We look forward to continuing the partnership, Murli. And thank you, Stu, for the opportunity here.
>>Absolutely, great talking to both of you. And stay tuned, lots more coverage of theCUBE at KubeCon + CloudNativeCon 2020 Europe. I'm Stu Miniman, and thank you for watching theCUBE.
SUMMARY :
and cloud, native con Europe 2020 Virtual brought to you by Red Hat, I actually I'm going to start with you just cause you know we've seen, of the things that for decades I t has had to do and has done to Of course, storage is a piece of the overall puzzle that that ranchers trying to help Ah, a lot of good technologies available, you know, Anybody that knows the storage industry knows that you know standards in various ways And so I think, you know, the third type of you know, we call container Native Storage. I think back if you want. I love to hear more is view as well, especially about you know, And that's kind of the solution that we provide. the rest of this year and get some early for you to you know, to run state for workloads on on kubernetes we want you know, causing disruption of the D r and backup and storage market with solutions like sports. Thank you so much for the updates. We look forward to ah, working with you as you reach for the stars. Still for the opportunity here. Absolutely great talking to both of you And stay tuned.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Red Hat | ORGANIZATION | 0.99+ |
$10 | QUANTITY | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Rancher Labs | ORGANIZATION | 0.99+ |
Shang Amerli | PERSON | 0.99+ |
NVIDIA | ORGANIZATION | 0.99+ |
2020 | DATE | 0.99+ |
Shannon | PERSON | 0.99+ |
uber | ORGANIZATION | 0.99+ |
Western Asset Management | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
Both | QUANTITY | 0.99+ |
20 terabytes | QUANTITY | 0.99+ |
CN CF. | ORGANIZATION | 0.99+ |
20 notes | QUANTITY | 0.99+ |
Marie | PERSON | 0.99+ |
Morissette | PERSON | 0.99+ |
both | QUANTITY | 0.99+ |
100% | QUANTITY | 0.99+ |
T Mobile Comcast | ORGANIZATION | 0.99+ |
one issue | QUANTITY | 0.99+ |
Xiang Yang | PERSON | 0.99+ |
first | QUANTITY | 0.99+ |
10 | QUANTITY | 0.99+ |
8 | QUANTITY | 0.99+ |
One | QUANTITY | 0.98+ |
Sheng Liang | PERSON | 0.98+ |
second type | QUANTITY | 0.98+ |
C plus plus | TITLE | 0.98+ |
Chang | PERSON | 0.98+ |
KubeCon | EVENT | 0.98+ |
Xiang | PERSON | 0.98+ |
Sue Save | PERSON | 0.98+ |
15 years ago | DATE | 0.98+ |
ORGANIZATION | 0.98+ | |
longhorn | ORGANIZATION | 0.97+ |
Shang | PERSON | 0.97+ |
two layers | QUANTITY | 0.97+ |
earlier this year | DATE | 0.97+ |
Longhorn | ORGANIZATION | 0.97+ |
one | QUANTITY | 0.97+ |
Roche March X | ORGANIZATION | 0.97+ |
345 additional layers | QUANTITY | 0.97+ |
GMC | ORGANIZATION | 0.97+ |
16 nodes | QUANTITY | 0.96+ |
CN cf | ORGANIZATION | 0.96+ |
third type | QUANTITY | 0.96+ |
each one | QUANTITY | 0.96+ |
about 160 plus customers | QUANTITY | 0.95+ |
a few months later | DATE | 0.95+ |
both communities | QUANTITY | 0.94+ |
First | QUANTITY | 0.94+ |
over five years old | QUANTITY | 0.94+ |
CN CF | ORGANIZATION | 0.93+ |
EBS | ORGANIZATION | 0.93+ |
three types | QUANTITY | 0.93+ |
two | QUANTITY | 0.93+ |
600 thousands of notes | QUANTITY | 0.93+ |
Merlin Chang | PERSON | 0.93+ |
Sigs | ORGANIZATION | 0.92+ |
hundreds of containers | QUANTITY | 0.91+ |
One way | QUANTITY | 0.91+ |
The Cloud Native Computing Foundation | ORGANIZATION | 0.9+ |
this year | DATE | 0.89+ |
Coop | ORGANIZATION | 0.89+ |
Europe | LOCATION | 0.89+ |
Port Works | ORGANIZATION | 0.89+ |
CloudNativeCon Europe | EVENT | 0.88+ |
Cube | COMMERCIAL_ITEM | 0.87+ |
CSC | TITLE | 0.87+ |
A couple of years ago | DATE | 0.86+ |
Coop con | ORGANIZATION | 0.86+ |
Kubernetes | TITLE | 0.86+ |
Portworx | ORGANIZATION | 0.86+ |
six storage | QUANTITY | 0.85+ |
today | DATE | 0.84+ |
rancher | ORGANIZATION | 0.84+ |
Cube Con | COMMERCIAL_ITEM | 0.84+ |
Golan | TITLE | 0.83+ |
Port Works | ORGANIZATION | 0.82+ |
10 nodes | QUANTITY | 0.82+ |
Renaud Gaubert, NVIDIA & Diane Mueller, Red Hat | KubeCon + CloudNativeCon NA 2019
>>Live from San Diego, California It's the Q covering Koopa and Cloud Native Cot brought to you by Red Cloud, Native Computing Pounding and its ecosystem March. >>Welcome back to the Cube here at Q. Khan Club native Khan, 2019 in San Diego, California Instrumental in my co host is Jon Cryer and first of all, happy to welcome back to the program. Diane Mueller, who is the technical of the tech lead of cloud native technology. I'm sorry. I'm getting the wrong That's director of community development Red Hat, because renew. Goodbye is the technical lead of cognitive technologies at in video game to the end of day one. I've got three days. I gotta make sure >>you get a little more Red Bull in the conversation. >>All right, well, there's definitely a lot of energy. Most people we don't even need Red Bull here because we're a day one. But Diane, we're going to start a day zero. So, you know, you know, you've got a good group of community of geeks when they're like Oh, yeah, let me fly in a day early and do like 1/2 day or full day of deep dives. There So the Red Hat team decided to bring everybody on a boat, I guess. >>Yeah. So, um, open ships Commons gathering for this coup con we hosted at on the inspiration Hornblower. We had about 560 people on a boat. I promised them that it wouldn't leave the dock, but we deal still have a little bit of that weight going on every time one of the big military boats came by. And so people were like a little, you know, by the end of the day, but from 8 a.m. in the morning till 8 p.m. In the evening, we just gathered had some amazing deep dives. There was unbelievable conversations onstage offstage on we had, ah, wonderful conversation with some of the new Dev ops folks that have just come on board. That's a metaphor for navigation and Coop gone. And and for events, you know, Andrew Cliche for John Willis, the inevitable Crispin Ella, who runs Open Innovation Labs, and J Bloom have all just formed the global Transformation Office. I love that title on dhe. They're gonna be helping Thio preach the gospel of Cultural Dev ops and agile transformation from a red hat office From now going on, there was a wonderful conversation. I felt privileged to actually get to moderate it and then just amazing people coming forward and sharing their stories. It was a great session. Steve Dake, who's with IBM doing all the SDO stuff? Did you know I've never seen SDO done so well, Deployment explains so well and all of the contents gonna be recorded and up on Aaron. We streamed it live on Facebook. But I'm still, like reeling from the amount of information overload. And I think that's the nice thing about doing a day zero event is that it's a smaller group of people. So we had 600 people register, but I think was 560 something. People show up and we got that facial recognition so that now when they're traveling through the hallways here with 12,000 other people, that go Oh, you were in the room. I met you there. And that's really the whole purpose for comments. Events? >>Yeah, I tell you, this is definitely one of those shows that it doesn't take long where I say, Hey, my brain is full. Can I go home. Now. You know I love your first impressions of Q Khan. Did you get to go to the day zero event And, uh, what sort of things have you been seeing? So >>I've been mostly I went to the lightning talks, which were amazing. Anything? Definitely. There. A number of shout outs to the GPU one, of course. Uh, friend in video. But I definitely enjoyed, for example, of the amazing D. 
M s one, the one about operators. And generally all of them were very high quality. >>Is this your first Q? Khan, >>I've been there. I've been a year. This is my third con. I've been accused in Europe in the past. Send you an >>old hat old hand at this. Well, before we get into the operator framework and I wanna love to dig into this, I just wanted to ask one more thought. Thought about open shift, Commons, The Commons in general, the relationship between open shift, the the offering. And then Okay, the comments and okay, D and then maybe the announcement about about Okay. Dee da da i o >>s. Oh, a couple of things happened yesterday. Yesterday we dropped. Okay, D for the Alfa release. So anyone who wants to test that out and try it out it's an all operators based a deployment of open shift, which is what open ship for is. It's all a slightly new architectural deployment methodology based on the operator framework, and we've been working very diligently. Thio populate operator hub dot io, which is where all of the upstream projects that have operators like the one that Reynolds has created for in the videos GP use are being hosted so that anyone could deploy them, whether on open shift or any kubernetes so that that dropped. And yesterday we dropped um, and announced Open Sourcing Quay as project quay dot io. So there's a lot of Io is going on here, but project dia dot io is, um, it's a fulfillment, really, of a commitment by Red Hat that whenever we do an acquisition and the poor folks have been their acquired by Cora West's and Cora Weston acquired by Red Hat in an IBM there. And so in the interim, they've been diligently working away to make the code available as open source. And that hit last week and, um, to some really interesting and users that are coming up and now looking forward to having them to contribute to that project as well. But I think the operator framework really has been a big thing that we've been really hearing, getting a lot of uptake on. It's been the new pattern for deploying applications or service is on getting things beyond just a basic install of a service on open shift or any kubernetes. And that's really where one of the exciting things yesterday on we were talking, you know, and I were talking about this earlier was that Exxon Mobil sent a data scientist to the open ship Commons, Audrey Resnick, who gave this amazing presentation about Jupiter Hub, deeper notebooks, deploying them and how like open shift and the advent of operators for things like GP use is really helping them enable data scientists to do their work. Because a lot of the stuff that data signs it's do is almost disposable. They'll run an experiment. Maybe they don't get the result they want, and then it just goes away, which is perfect for a kubernetes workload. But there are other things you need, like a Jeep use and work that video has been doing to enable that on open shift has been just really very helpful. And it was It was a great talk, but we were talking about it from the first day. Signs don't want to know anything about what's under the hood. They just want to run their experiments. So, >>you know, let's like to understand how you got involved in the creation of the operator. >>So generally, if we take a step back and look a bit at what we're trying to do is with a I am l and generally like EJ infrastructure and five G. We're seeing a lot of people. They're trying to build and run applications. 
Whether it's in data Center at the and we're trying to do here with this operator is to bring GPS to enterprise communities. And this is what we're working with. Red Hat. And this is where, for example, things like the op Agrestic A helps us a lot. So what we've built is this video Gee, few operator that space on the upper air sdk where it wants us to multiple phases to in the first space, for example, install all the components that a data scientist were generally a GPU cluster of might want to need. Whether it's the NVIDIA driver, the container runtime, the community's device again feast do is as you go on and build an infrastructure. You want to be able to have the automation that is here and, more importantly, the update part. So being able to update your different components, face three is generally being able to have a life cycle. So as you manage multiple machines, these are going to get into different states. Some of them are gonna fail, being able to get from these bad states to good states. How do you recover from them? It's super helpful. And then last one is monitoring, which is being able to actually given sites dr users. So the upper here is decay has helped us a lot here, just laying out these different state slips. And in a way, it's done the same thing as what we're trying to do for our customers. The different data scientists, which is basically get out of our way and allow us to focus on core business value. So the operator, who basically takes care of things that are pretty cool as an engineer I lost due to your election. But it doesn't really help me to focus on like my core business value. How do I do with the updates, >>you know? Can I step back one second, maybe go up a level? The problem here is that each physical machine has only ah limited number of NVIDIA. GPU is there and you've got a bunch of containers that maybe spawning on different machines. And so they have to figure out, Do I have a GPU? Can I grab one? And if I'm using it, I assume I have to reserve it and other people can't use and then I have to give it up. Is that is that the problem we're solving here? So this is >>a problem that we've worked with communities community so that like the whole resource management, it's something that is integrated almost first class, citizen in communities, being able to advertise the number of deep, use their your cluster and used and then being able to actually run or schedule these containers. The interesting components that were also recently added are, for example, the monitoring being able to see that a specific Jupiter notebook is using this much of GP utilization. So these air supercool like features that have been coming in the past two years in communities and which red hat has been super helpful, at least in these discussions pushing these different features forward so that we see better enterprise support. Yeah, >>I think the thing with with operators and the operator lifecycle management part of it is really trying to get to Day two. So lots of different methodologies, whether it's danceable or python or job or or UH, that's helm or anything else that can get you an insult of a service or an application or something. And in Stan, she ate it. But and the operator and we support all of that with SD case to help people. 
But what we're trying to do is bridge the to this day to stuff So Thea, you know, to get people to auto pilot, you know, and there's a whole capacity maturity model that if you go to operator hab dot io, you can see different operators are a different stages of the game. So it's been it's been interesting to work with people to see Theo ah ha moment when they realize Oh, I could do this and then I can walk away. And then if that pod that cluster dies, it'll just you know, I love the word automatically, but they, you know, it's really the goal is to help alleviate the hands on part of Day two and get more automation into the service's and applications we deploy >>right and when they when they this is created. Of course it works well with open shift, but it also works for any kubernetes >>correct operator. HAB Daddio. Everything in there runs on any kubernetes, and that's really the goal is to be ableto take stuff in a hybrid cloud model. You want to be able to run it anywhere you want, so we want people to be unable to do it anywhere. >>So if this really should be an enabler for everything that it's Vinny has been doing to be fully cloud native, Yes, >>I think completely arable here is this is a new attack. Of course, this is a bit there's a lot of complexity, and this is where we're working towards is reducing the complexity and making true that people there. Dan did that a scientist air machine learning engineers are able to focus on their core business. >>You watch all of the different service is in the different things that the data scientists are using. They don't I really want to know what's under under the hood. They would like to just open up a Jupiter Hub notebook, have everything there. They need, train their models, have them run. And then after they're done, they're done and it goes away. And hopefully they remember to turn off the Jeep, use in the woods or wherever it is, and they don't keep getting billed for it. But that's the real beauty of it is that they don't have to worry so much anymore about that. And we've got a whole nice life cycle with source to image or us to I. And they could just quickly build on deploy its been, you know, it's near and dear to my heart, the machine learning the eyesight of stuff. It is one of the more interesting, you know, it's the catchy thing, but the work was, but people are really doing it today, and it's been we had 23 weeks ago in San Francisco, we had a whole open ship comments gathering just on a I and ML and you know, it was amazing to hear. I think that's the most redeeming thing or most rewarding thing rather for people who are working on Kubernetes is to have the folks who are doing workloads come and say, Wow, you know, this is what we're doing because we don't get to see that all the time. And it was pretty amazing. And it's been, you know, makes it all worthwhile. So >>Diane Renaud, thank you so much for the update. Congratulations on the launch of the operators and look forward to hearing more in the future. >>All right >>to >>be here >>for John Troy runs to minimum. More coverage here from Q. Khan Club native Khan, 2019. Thanks for watching. Thank you.
SUMMARY :
Koopa and Cloud Native Cot brought to you by Red Cloud, California Instrumental in my co host is Jon Cryer and first of all, happy to welcome back to the program. There So the Red Hat team decided to bring everybody on a boat, And that's really the whole purpose for comments. Did you get to go to the day zero event And, uh, what sort of things have you been seeing? But I definitely enjoyed, for example, of the amazing D. I've been accused in Europe in the past. The Commons in general, the relationship between open shift, And so in the interim, you know, let's like to understand how you got involved in the creation of the So the operator, who basically takes care of things that Is that is that the problem we're solving here? added are, for example, the monitoring being able to see that a specific Jupiter notebook is using this the operator and we support all of that with SD case to help people. Of course it works well with open shift, and that's really the goal is to be ableto take stuff in a hybrid lot of complexity, and this is where we're working towards is reducing the complexity and It is one of the more interesting, you know, it's the catchy thing, but the work was, Congratulations on the launch of the operators and look forward for John Troy runs to minimum.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Audrey Resnick | PERSON | 0.99+ |
Andrew Cliche | PERSON | 0.99+ |
Diane Mueller | PERSON | 0.99+ |
Steve Dake | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Jon Cryer | PERSON | 0.99+ |
Exxon Mobil | ORGANIZATION | 0.99+ |
Diane Renaud | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
John Troy | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
1/2 day | QUANTITY | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
San Diego, California | LOCATION | 0.99+ |
first | QUANTITY | 0.99+ |
J Bloom | PERSON | 0.99+ |
Diane | PERSON | 0.99+ |
2019 | DATE | 0.99+ |
Open Innovation Labs | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
Red Cloud | ORGANIZATION | 0.99+ |
560 | QUANTITY | 0.99+ |
NVIDIA | ORGANIZATION | 0.99+ |
600 people | QUANTITY | 0.99+ |
three days | QUANTITY | 0.99+ |
John Willis | PERSON | 0.99+ |
8 a.m. | DATE | 0.99+ |
Crispin Ella | PERSON | 0.99+ |
Jeep | ORGANIZATION | 0.99+ |
San Diego, California | LOCATION | 0.99+ |
Cora West | ORGANIZATION | 0.99+ |
Yesterday | DATE | 0.99+ |
last week | DATE | 0.99+ |
SDO | TITLE | 0.99+ |
Dan | PERSON | 0.99+ |
8 p.m. | DATE | 0.98+ |
23 weeks ago | DATE | 0.98+ |
first impressions | QUANTITY | 0.98+ |
one second | QUANTITY | 0.98+ |
Q. Khan Club | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.98+ |
Renau | PERSON | 0.98+ |
Red Bull | ORGANIZATION | 0.98+ |
Reynolds | PERSON | 0.97+ |
Aaron | PERSON | 0.97+ |
Day two | QUANTITY | 0.97+ |
March | DATE | 0.96+ |
third con. | QUANTITY | 0.96+ |
first space | QUANTITY | 0.96+ |
first day | QUANTITY | 0.95+ |
Vinny | PERSON | 0.95+ |
Cora Weston | ORGANIZATION | 0.94+ |
Thio | PERSON | 0.94+ |
Cloud | ORGANIZATION | 0.93+ |
ORGANIZATION | 0.92+ | |
first class | QUANTITY | 0.92+ |
today | DATE | 0.9+ |
about 560 people | QUANTITY | 0.9+ |
Jupiter | LOCATION | 0.89+ |
each physical machine | QUANTITY | 0.88+ |
12,000 other | QUANTITY | 0.88+ |
day zero | QUANTITY | 0.88+ |
D. M | PERSON | 0.87+ |
CloudNativeCon NA 2019 | EVENT | 0.87+ |
d Gaubert | PERSON | 0.87+ |
Thea | PERSON | 0.86+ |
python | TITLE | 0.84+ |
Native Computing Pounding | ORGANIZATION | 0.83+ |
a day | QUANTITY | 0.79+ |
day zero | EVENT | 0.78+ |
day one | QUANTITY | 0.78+ |
Koopa | ORGANIZATION | 0.76+ |
one more thought | QUANTITY | 0.74+ |
Khan | PERSON | 0.72+ |
Commons | ORGANIZATION | 0.72+ |
KubeCon + | EVENT | 0.72+ |
Jupiter Hub | ORGANIZATION | 0.71+ |
Eric Herzog, IBM Storage | VMworld 2019
>> live from San Francisco, celebrating 10 years of high tech coverage. It's the Cube covering Veum, World 2019 brought to you by the M Wear and its ecosystem partners. >> Welcome back to San Francisco. Day three of our coverage here on the Cube Of'em world 2019. I'm John Wall's Glad to have you here aboard for our continuing coverage here Day Volonte is also joining me, as is the sartorially resplendent Eric Herzog, cm of and vice president. Global storage channels that IBM storage. Eric, good to see you and love the shirt. Very >> nice. Thank you. Well, always have a wine shirts when I'm on the Cube >> I love in a long time Cuba to we might say, I'm sure he's got the record. Yeah, might pay. Well, >> you and pattern, neck and neck. We'll go to >> the vault. And well, >> since Pat used to be my boss, you know, couch out a path. >> Well, okay. Let the little show what IBM think. Maybe. Well, that's OK. Let's just start off a big picture. We're in all this, you know. Hybrid. Multilingual. This discussion went on this week. Obviously, just your thoughts about general trends and where the business is going now supposed to wear? Maybe we're 23 years ago. Well, the >> good thing is for IBM storage, and we actually came to your partner and titty wiki Bond when our new general manager, Ed Walsh, joined. And we came and we saw Dave and John at the old office are at your offices, and we did a pitch about hybrid multi cloud. Remember that gave us some feedback of how to create a new slide. So we created a slide based on Dave's input, and we were doing that two and 1/2 years ago. So we're running around telling the storage analyst Storage Press about hybrid multi cloud based on IBM storage. How weaken transparently move data, things we do with backup, Of course. An archive. You've got about 450 small and medium cloud providers. Their backup is a service engine. Is our spectrum protect? And so we talked about that. So Dave helped us craft the slide to make it better, because he said, we left a couple things >> out that Eric >> owes you. There were a few other analysts I'm sure you talked to and got input, but but us really were the first toe to combine those things in your in your marketing presentations. But >> let's I'd love to get >> an update on the business. Yes, help people understand the IBM storage organization. You guys created the storage business, you know, years and years and years ago. It's a it's a you know you've got your core business, which is column arms dealers. But there's a lot of Regent IBM, the Cloud Division. You've got the service's division, but so help us understand this sort of organizational structure. So >> the IBM story division's part of IBM Systems, which includes both the mainframe products Z and the Power Server entities. So it's a server in storage division. Um, the Easy guys in particular, have a lots of software that they sell and not just mainframe. So they have a very, very large software business, as do we. As you know, from looking at people that do the numbers, We're the second largest storage software company in the world, and the bulk of that software's not running on IBM gear. So, for example, spectrum protect will back up anyone's array spectrum scale and our IBM Cloud Object storage are sold this software only software defined as the spectrum virtualized. You could basically create a J. Bader Jabo after your favorite distributor or reseller and create your honor. 
Rates are software, but the all of the infrastructure would actually not be ours, not branded by us. And you call us for tech support for the software side. But if you had a bad power supplier fan, you'd have to call, you know, the reseller distributor said this very robust storage software business. Obviously you make sure that was compatible with the other server elements of IBM systems. But the bulk of our storage is actually sitting connect to some server that doesn't have an IBM logo on it. So that's the bulk of our business connected to Intel servers of all types that used to include, of course, IBM Intel Server division, which was sold off to Lenovo. So we still have a very robust business in the array space that has nothing to do with working on a power machine are working on a Z machine, although we clearly worked very heavily with them and have a number of things going with him, including something that's coming very shortly in the middle of September on some new high end products that we're going to dio >> went 90 Sea Counts All this stuff. Do they >> count to give IBM credit for all the storage that lives inside of the IBM Cloud? Do you get you get credit for that or >> not get credit for that? So when they count our number, it's only the systems that we sell and the storage software that we sell. So if you look at if we were a standalone company, which would include support service made everything, some of which we don't get credit for, right, the support and service is a different entity at IBM that does that, UM, the service's group, the tech support that all goes to someone else. We don't have a new credit >> so hypothetical I don't I don't think this is the case, but let's say hypothetically, if pure storage sold an array into IBM Cloud, they would get credit for it. But if you're array and I'm sure this happens is inside of the IBM, you don't get credit for it. >> That's true interesting, so it's somewhat undercounts. Part of that is the >> way we internally count because we're selling it to ourselves. >> But that's it. >> It's not. It's more of an accounting thing, but it's different when we sell the anybody else. So, for example, we sell the hundreds of cloud providers who in theory compete with the IBM Cloud Division >> to you Get credit for that. You get credit for your own away. That's way work. But if we were standing >> on coming for, say, government, we were Zog in store and I bought the company away, we would be about a $6.3 billion standalone storage software company. That's what we would be if we were all in because support service manes. If we were our own company with our own right legal entity, just like net app or the other guys, we'd be Stanley would be in that, you know, low $6 billion range, counting everything all in. When we do report publicly, we only report our storage system because we don't report our storage software business. And as you notice a few times, our CFO has made comments. If we did count, the storage software visit would be ex, and he's publicly stated that price at least two times. Since I've been an idea when he talks about the software on, but legally we only talk about IBM storage systems. When he publicly state our numbers out onto Wall Street, that's all >> we publicly report. So, um, you're like, you're like a walking sheet of knowledge here, but I wonder if you could take the audience through the portfolio. Oh, it's vast. How should we think about it? And the names have changed. 
You talk about, you know, 250 a raise, whatever it is the old sand volume control. And now it's a spectrum virtualized, >> right? So take us to the portfolio. What's the current? It's free straight for. >> We have really three elements in the portfolio, all built around, if you will, solution plays. But the three real elements in the portfolio our storage arrays, storage systems, we have entry mid range and high end, just like our competitors do. We lead with all flash, but we still sell hybrid and obviously, for backup, an archive. We still sell all hard drive right for those workloads. So and we have filed blocking object just like most other guys do, Um, for an array, then we have a business built around software, and we have two key elements. Their software defined storage, and we saw that software completely stand alone. It happens, too, by the way, be embedded on the arrays. So, for example, Dave, you mentioned Spectrum virtualized that ship's on flash systems and store wise. But if you don't want our raise, we will sell you just spectrum virtualized alone for block spectrum scale for Big Big Data A. I file Workloads and IBM caught object storage, which could all of them could be bought on an array. But they also could be bought. Itjust Standalone component. Yes, there's a software so part of the advantage we feel that delivers. It's some of the people that have software defined storage, that air raid guys. It's not the same software, so for us, it's easier for us to support and service. It's easier for a stack developing have leading it. Features is not running two different pieces of software running, one that happens to have a software on Lee version or an array embedded version. So we've got that, and then the third is around modern data protection, and that's really it. So a modern data protection portfolio built around spectrum, protect and Protect Plus and some other elements. A software to find storage where we sell the software only, and then arrays. That's it. It's really three things and not show. Now they're all kinds components underneath the hood. But what we really do is we sell. We don't really run around and talk about off last race. We talk about hybrid multi cloud. Now all of our flash raise and a lot of our software defined storage will automatically tear data out, too. Hybrid multi cloud configurations. We just So we lead with that same thing. We have one around cyber resiliency. Now, the one thing that spans the whole portfolio of cyber resiliency way have cyber rebellion see and a raise. We have some softer on the mainframe called Safeguarded Copy that creates immutable copies and has extra extra security for the management rights. You've got management control, and if you have a malware ransomware attack, you couldn't recover to these known good copies. So that's a piece of software that we sell on the mainframe on >> how much growth have you seen in that in? Because he's never reveals if you've got it resonating pervasive, right, Pervasive. So >> we've got, for example, malware and ransomware detection. Also, Inspector protect. So it's taken example. So I'm going to steal from the Cube and I'm gonna ask Dave and for you, I want a billion dollars and Dave's gonna laugh at me because he used a spectrum protect. He's gonna start laughing. But if I'm the ransomware guy, what do I do? I go after your snapshots, your replicas and your backup data sets. First, I make sure I've got those under control. 
And then when I tell you I'm holding you for ransom, you can't go back to a known good copy. So Ransomware goes after backup snaps and replicas first. Then it goes half your primary storage. So what we do, inspector protect, for example, is we know that at Weeki Bond and the Cube, you back up every night from 11 32 1 30 takes two hours to back you up every night. It's noon. There's tons of activity in the backup data sets. What the heck is going on? We send it out to the admin, So the admin for the Cube wicky bond takes a look and says, No server failure. So you can't be doing a lot of recovery because of a bad server. No storage failures. What the heck is going on? It could be a possible mount where ransomware attack. So that type of technology, we encrypt it, rest on all of our store to raise. We have both tape and tape and cloud air gapping. I'm gonna ask you about that. We've got both types of air gapped >> used to hate tape. Now he loves my love, right? No, I used to hate it, But now I love it because it's like the last resort, just in case. And you do air gapping when you do a WR gapping with customers, Do you kind of rotate the You know, it's like, uh, you know, the Yasser Arafat used to move every night. You sleep in a different place, right? You gonna rotate the >> weird analogy? You do >> some stuff. There's a whole strategy >> of how we outlined how you would do a tape air gap, you a cloud air gap. Of course you're replicating or snapping out to the cloud anyway, so they can't get to that. So if you have a failure, we haven't known good copy, depending on what time that is, right. And then you just recover. Cover back to that and even something simple. We have data rest, encryption. Okay. A lot of people don't use it or won't use it on storage because it's often software based, and so is permanent. Well, in our D s platform on the mainframe, we can encrypt with no performance hit on our flash system products we can encrypt with no performance it on our high end store. Wise, we have four models on the two high end stores models we could encrypt with no performance penalty. So why would you not encrypt all your debt? When there's a performance penalty, you have to sort of pick and choose. My God, I got to encrypt this valuable financial data, but, boy, I really wish it wasn't so slow with us. There is no performance it when you encrypt. So we have encryption at rest, encryption at flight malware and ran somewhere detection. We've got worm, which is important, obviously, doesn't mean I can't steal from wicked Bond Cube, but I certainly can't go change all your account numbers for all your vendors. For sake. of argument, right? So and there's obviously heavily regulated industries that still require worm technology, right? Immutable on the fine, by the way, you could always if it's wormed, you could encrypt it if you want to write. Because Worm just means it's immutable. It doesn't. It's not a different data type. It's just a mutable version of that data. >> So the cyber resiliency is interesting, and it leads me to another question I have around just are, indeed so A lot of companies in this industry do a lot of D developing next generation products. I think, you know, look a t m c when you were there, you know, this >> was a lot of there. Wasn't a ton, >> of course, are a lot of patents and stuff like that. 
IBM does corps are a lot of research and research facilities, brainiac scientists, I want if you could talk about how the storage division takes advantage of that, either specifically, is it relates to cyber resiliency. But generally, >> yes, so as you know, IBM has got, I think it's like 12 12 or 15 research on Lee sites that that's all they do, and everyone there is, in fact, my office had to be. Akiyama didn't labs, and there's two labs actually hear. The AMA didn't research lab and the Silicon Valley lab, which is very close about five miles away. Beautiful. Almost everything. There is research. There's a few product management guys I happen, Navid desk there every once. Well, see a sales guy or two. But essentially, they're all Richard with PhDs from the leading inverse now at Al Madden and many sites, all the divisions have their own research teams there. There's a heavy storage contingent at Al Midan as an example. Same thing in Zurich. So, for example, we just announced last week, as you know, stuff that will work with Quantum on the tape side. So you don't have to worry about because one of things, obviously, that people complain about quantum computing, whether it's us or anyone else, the quantum computing you can crack basically any encryption. Well, guess what? IBM research has developed tape that can be encrypted. So if using quantum computer, whether it be IBM or someone else's when you go with quantum computing, you can have secured data because the quantum computer can't actually cracked the encryption that we just put into that new tape that was done at IBM Research. How >> far away are we from From Quantum, actually being ableto be deployed and even minor use cases. >> Well, we've got available right now in ibm dot com for Betas. So we've got several 1000 people who have been accidents in it. And entities, we've been talking publicly in the 3 to 7 year timeframe for quantum computer crap out. Should it? Well, no, because if you do the right sort of security, you don't but the power. So if you're envisioning one of my favorite movies, I robot, right where she's doing her talking and that's that would really be quantum in all honesty. But at the same time, you know, the key thing IBM is all about ethics and all about how we do things, whether it be what we do with our diversity programs and hiring. And IBM is always, you know, at the forefront of doing and promoting ethical this and ethical. Then >> you do a customer data is huge. >> Yeah, and what we do with the customer data sets right, we do. GDP are, for example, all over the world were not required by law to do it really Only in Europe we do it everywhere. And so if you're not, if you're in California, if you happen to be in Zimbabwe or you're in Brazil, you get the same protection of GDP are even though we're not legally required to do it. And why are we doing that? Because they're always concerned about customers data, and we know they're paranoid about it. We want to make sure people feel comfortable with IBM. We do. Quantum computing will end up in that same vein. >> But you know, I don't worry about you guys. I were about the guys on the other side of the fence, the ones that I worry about, the same thing Capabilities knew that was >> on, of course. And you know, he talked about it in his speech, and he talked about action on the Cube yesterday about some of his comments on the point, and he mentioned that was based on Blockchain. What he said was Blockchain is a great technology. They've got Blockchain is no. 
IBM is a big believer in Blockchain. We promoted all over the place and in fact we've done all kinds of different Blockchain things we just did. One announced it last week with Australia with the Australian. I think it is with their equivalent of Wall Street. We've done some stuff with Merrick, the big shipping container thing, and it's a big consortium. That's all legal stuff that was really talking about someone using it the wrong way. And he's very specific point out that Blockchain is a great technology if used ethically, and IBM is all about how we do it. So we make sure whether be quantum computing, Blockchain, et cetera, that everything we do at IBM is about helping the end users, making sure that we're making, for example, open source. As you know. Well, the number one provider of open source technology pre read had acquisition is IBM. We submit Maur into the open community. Renounce Now are we able to make some money off of that? Sure we are, but we do it for a reason, because IBM believes as day point out in this core research. Open computing is court research, and we just join the Open Foundation last week as well. So we're really big on making sure that what we do ourselves is Ethel now We try to make sure that what happens in the hands of people who buy our technology, which we can always track, is also done ethically. And we go out of our way to join the right industry. Associations work with governments, work with whatever we need to do to help make sure that technology could really be iRobot. Anyone who thinks that's not true. If you talk to your grandparent's goto, go to the moon. What are you talking about? >> What Star Trek. It's always >> come to me. Oh, yeah, >> I mean, if you're your iPhone is basically the old community. Transport is the only thing I wish I could have the transfer. Aziz. You know, >> David has the same frame us up. I'm afraid of flying, and I I felt like two million miles on United and David. He's laughs about flowers, so I'm waiting for the transport. I know that's why anymore there's a cone over here. Go stand. Or maybe maybe with a little bit of like, I'm selling my Bitcoin. No, hang on, just hold on. There's always a comeback. Not always. There could be a comeback because Derek always enjoy it as always. Thanks for the good seeing you. All right, Back with more Veum. World 2019 The Cube live in San Francisco.
SUMMARY :
brought to you by the M Wear and its ecosystem partners. Eric, good to see you and love the shirt. Well, always have a wine shirts when I'm on the Cube I love in a long time Cuba to we might say, I'm sure he's got the record. you and pattern, neck and neck. the vault. Well, the So we created a slide based on Dave's input, and we were doing that two There were a few other analysts I'm sure you talked to and got input, but but us really were the first You guys created the storage business, you know, years and years and years ago. So that's the bulk of our business connected to Intel servers of all types that used to include, Do they So if you look at if we were a standalone company, which would include support service But if you're array and I'm sure this happens is inside of the IBM, you don't get credit for it. Part of that is the So, for example, we sell the hundreds of cloud providers who in theory compete with the IBM Cloud Division to you Get credit for that. the other guys, we'd be Stanley would be in that, you know, low $6 billion range, counting everything all in. And the names have changed. What's the current? So and we have filed blocking object just like most other guys do, Um, how much growth have you seen in that in? is we know that at Weeki Bond and the Cube, you back up every night from 11 32 the You know, it's like, uh, you know, the Yasser Arafat used to move There's a whole strategy of how we outlined how you would do a tape air gap, you a cloud air gap. So the cyber resiliency is interesting, and it leads me to another question I have around just are, Wasn't a ton, research and research facilities, brainiac scientists, I want if you could talk about we just announced last week, as you know, stuff that will work with Quantum on far away are we from From Quantum, actually being ableto be deployed and even minor But at the same time, you know, the key thing IBM is all about ethics and all about how we by law to do it really Only in Europe we do it everywhere. But you know, I don't worry about you guys. And you know, he talked about it in his speech, and he talked about action on the Cube yesterday about come to me. Transport is the only thing I wish I could have the transfer. Thanks for the good seeing you.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Eric Herzog | PERSON | 0.99+ |
Ed Walsh | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
David | PERSON | 0.99+ |
California | LOCATION | 0.99+ |
Zimbabwe | LOCATION | 0.99+ |
Zurich | LOCATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Brazil | LOCATION | 0.99+ |
Lenovo | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
two labs | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Eric | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Derek | PERSON | 0.99+ |
10 years | QUANTITY | 0.99+ |
two million miles | QUANTITY | 0.99+ |
AMA | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
3 | QUANTITY | 0.99+ |
12 | QUANTITY | 0.99+ |
two hours | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
second | QUANTITY | 0.99+ |
First | QUANTITY | 0.99+ |
John Wall | PERSON | 0.99+ |
Star Trek | TITLE | 0.99+ |
both | QUANTITY | 0.99+ |
Merrick | ORGANIZATION | 0.99+ |
IBM Research | ORGANIZATION | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
IBM research | ORGANIZATION | 0.98+ |
third | QUANTITY | 0.98+ |
Yasser Arafat | PERSON | 0.98+ |
two | DATE | 0.98+ |
23 years ago | DATE | 0.98+ |
$6 billion | QUANTITY | 0.98+ |
hundreds | QUANTITY | 0.98+ |
Akiyama | PERSON | 0.98+ |
Al Madden | ORGANIZATION | 0.98+ |
Open Foundation | ORGANIZATION | 0.98+ |
Richard | PERSON | 0.98+ |
7 year | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
Weeki Bond | ORGANIZATION | 0.97+ |
250 | QUANTITY | 0.97+ |
Aziz | PERSON | 0.97+ |
this week | DATE | 0.97+ |
Wall Street | LOCATION | 0.96+ |
first toe | QUANTITY | 0.96+ |
$6.3 billion | QUANTITY | 0.96+ |
two high end stores | QUANTITY | 0.96+ |
four models | QUANTITY | 0.96+ |
about five miles | QUANTITY | 0.96+ |
two different pieces | QUANTITY | 0.96+ |
Ethel | ORGANIZATION | 0.95+ |
Day three | QUANTITY | 0.95+ |
Pat | PERSON | 0.95+ |
both types | QUANTITY | 0.95+ |
Tom Barton, Diamanti | CUBEConversations, August 2019
>> from our studios in the heart of Silicon Valley, Palo Alto, California It is a cute conversation. >> Welcome to this Cube conversation here in Palo Alto, California. At the Cube Studios. I'm John for a host of the Cube. We're here for a company profile coming called De Monte. Here. Tom Barton, CEO. As V M World approaches a lot of stuff is going to be talked about kubernetes applications. Micro Service's will be the top conversation, Certainly in the underlying infrastructure to power that Tom Barton is the CEO of De Monte, which is in that business. Tom, we've known each other for a few years. You've done a lot of great successful ventures. Thehe Monty's new one. Your got on your plate here right now? >> Yes, sir. And I'm happy to be here, so I've been with the Amante GIs for about a year or so. Um, I found out about the company through a head turner. Andi, I have to admit I had not heard of the company before. Um, but I was a huge believer in containers and kubernetes. So has already sold on that. And so I had a friend of mine. His name is Brian Walden. He had done some massive kubernetes cloud based deployments for us at Planet Labs, a company that I was out for a little over three years. So I had him do technical due diligence. Brian was also the number three guy, a core OS, um, and so deeply steeped in all of the core technologies around kubernetes, including things like that CD and other elements of the technology. So he looked at it, came back and gave me two thumbs up. Um, he liked it so much that I then hired him. So he is now our VP of product management. And the the cool thing about the Amanti is essentially were a purpose built solution for running container based workloads in kubernetes on premises and then hooking that in with the cloud. So we believe that's very much gonna be a hybrid cloud world where for the major corporations that we serve Fortune 500 companies like banks like energy and utilities and so forth Ah, lot of their workload will maintain and be maintained on premises. They still want to be cloud compatible. So you need a purpose built platform to sort of manage both environments >> Yeah, we certainly you guys have compelling on radar, but I was really curious to see when you came in and took over at the helm of the CEO. Because your entrepreneurial career really has been unique. You're unique. Executive. Both lost their lands. And as an operator you have an open source and software background. And also you have to come very successful companies and exits there as well as in the hardware side with trackable you took. That company went public. So you got me. It's a unique and open source software, open source and large hardware. Large data center departments at scale, which is essentially the hybrid cloud market right now. So you kind of got the unique. You have seen the view from all the different sides, and I think now more than ever, with Public Cloud certainly being validated. Everyone knows Amazon of your greenfield. You started the cloud, but the reality is hybrid. Cloud is the operating model of the genesis. Next generation of companies drive for the next 20 to 30 years, and this is the biggest conversation. The most important story in tech. You're in the middle of it with a hot start up with a name that probably no one's ever heard of, >> right? We hope to change that. >> Wassily. Why did you join this company? What got your attention? What was the key thing once you dug in there? What was the secret sauce was what Got your attention? Yes. 
So to >> me again, the market environment. I'm a huge believer that if you look at the history of the last 15 years, we went from an environment that was 0% virtualized too. 95% virtualized with, you know, Vienna based technologies from VM Wear and others. I think that fundamentally, containers in kubernetes are equally as important. They're going to be equally as transformative going forward and how people manage their workloads both on premises and in the clouds. Right? And the fact that all three public cloud providers have anointed kubernetes as the way of the future and the doctor image format and run time as the wave of the future means, you know, good things were gonna happen there. What I thought was unique about the company was for the first time, you know, surprisingly, none of the exit is sick. Senders, um, in companies like Nutanix that have hyper converse solutions. They really didn't have anything that was purpose built for native container support. And so the founders all came from Cisco UCS. They had a lot of familiarity with the underpinnings of hyper converged architectures in the X 86 server landscape and networking, subsistence and storage subsystems. But they wanted to build it using the latest technologies, things like envy and me based Flash. Um, and they wanted to do it with a software stack that was native containers in Kubernetes. And today we support two flavors of that one that's fully open source around upstream kubernetes in another that supports our partner Red hat with open shift. >> I think you're really onto something pretty big here because one of things that day Volonte and Mine's too many men and our team had been looking at is we're calling a cloud to point over the lack of a better word kind of riff on the Web to point out concept. But cloud one daughter was Amazon. Okay, Dev ops agile, Great. Check the box. They move on with life. It's always a great resource, is never gonna stop. But cloud 2.0, is about networking. It's about securities but data. And if you look at all the innovation startups, we'll have one characteristic. They're all playing in this hyper converged hardware meat software stack with data and agility, kind of to make the original Dev ops monocle better. The one daughter which was storage and compute, which were virtualization planes. So So you're seeing that pattern and it's wide ranging at security is data everything else So So that's kind of what we call the Cloud two point game. So if you look at V m World, you look at what's going on the conversations around micro service red. It's an application centric conversation in an infrastructure show. So do you see that same vision? And if so, how do you guys see you enabling the customer at this saying, Hey, you know what? I have all this legacy. I got full scale data centers. I need to go full scale cloud and I need zero and disruption to my developer. Yeah, so >> this is the beauty of containers and kubernetes, which is they know it'll run on the premises they know will run in the cloud, right? Um and it's it is all about micro service is so whether they're trying to adopt them on our database, something like manga TB or Maria de B or Crunchy Post Grey's, whether it's on the operational side to enable sort of more frequent and incremental change, or whether it's on a developer side to take advantage of new ways of developing and delivering APS with C I. C. D. Tools and so forth. 
It's pretty much what people want to do because it's future proofing your software development effort, right? So there's sort of two streams of demand. One is re factoring legacy applications that are insufficiently kind of granule, arised on, behave and fail in a monolithic way. Um, as well as trying to adopt modern, modern, cloud based native, you know, solutions for things like databases, right? And so that the good news is that customers don't have to re factor everything. There are logical break points in their applications stack where they can say, Okay, maybe I don't have the time and energy and resource is too totally re factor a legacy consumer banking application. But at least I can re factor the data based here and serve up you know container in Kubernetes based service is, as Micro Service's database is, a service to be consumed by. >> They don't need to show the old to bring in the new right. It's used containers in our orchestration, Layla Kubernetes, and still be positioned for whether it's service measures or other things. Floor That piece of the shirt and everything else could run, as is >> right, and there are multiple deployments scenarios. Four containers. You can run containers, bare metal. Most of our customers choose to do that. You can also run containers on top of virtual machines, and you can actually run virtual machines on top of containers. So one of our major media customers actually run Splunk on top of K B M on top of containers. So there's a lot of different deployment scenarios. And really, a lot of the genius of our architecture was to make it easy for people that are coming from traditional virtualized environments to remap system. Resource is from the bm toe to a container at a native level or through Vienna. >> You mentioned the history lesson there around virtualization. How 15 years ago there was no virtualization now, but everything's virtualized we agree with you that containers and compares what is gonna change that game for the next 15 years? But what's it about VM? Where would made them successful was they could add virtualization without requiring code modification, right? And they did it kind of under the covers. And that's a concern Customs have. I have developers out there. They're building stacks. The building code. I got preexisting legacy. They don't really want to change their code, right? Do you guys fit into that narrative? >> We d'oh, right, So every customer makes their own choice about something like that. At the end of the day, I mentioned Splunk. So at the time that we supported this media customer on Splunk, Splunk had not yet provided a container based version for their application. Now they do have that, but at the time they supported K B M, but not native containers and so unmodified Splunk unmodified application. We took them from a batch job that ran for 23 hours down the one hour based on accelerating and on our perfect converged appliance and running unmodified code on unmodified K B m on our gear. Right, So some customers will choose to do that. But there are also other customers, particularly at scale for transaction the intensive applications like databases and messaging and analytics, where they say, You know, we could we could preserve our legacy virtualized infrastructure. But let's try it as a pair a metal container approach. And they they discovered that there's actually some savings from both a business standpoint and a technology tax standpoint or an overhead standpoint. 
And so, as I mentioned, most of our customers actually run bare metal for the efficiencies.
>> The batch job is a great example. Sticking to the product technology differentiation: what's the big secret sauce? Describe the product. Why are you winning in accounts? What's the lift in your business right now? You guys are getting some traction from what I'm hearing.
>> Yeah, sure. So look, at the highest level, the value proposition is simplicity. There is no other purpose-built, you know, complete hardware-software stack that delivers a production Kubernetes environment up and running in 15 minutes, right? The x86 server guys don't really have it. Nutanix doesn't really have it. The software companies that are active in this space don't really have it. So, everything that you need: the hardware platform, the storage infrastructure, the actual distribution of the operating system, CentOS for example. We actually distribute a Kubernetes distribution, upstream and unmodified. And then, very importantly, in the Kubernetes landscape, you have to have a storage subsystem and a networking subsystem, using something called CSI, the Container Storage Interface, and CNI, the Container Networking Interface. So we've got that full-stack solution; no one else has that. The second thing is the performance. So we do a certain amount of hardware offload. Um, and I would point to Amazon's purchase of Annapurna. So Amazon bought a company called Annapurna; it's the basis of their Nitro technology, and it's little known, but the reality is more than 50% of all new instances at EC2 are hardware-assisted with the technology that they bought.
>> The I/O is offloaded.
>> Yeah, exactly. So we actually offload storage and network processing via two PCIe cards that can go into any industry-standard server, right? So today we ship on Intel white boxes.
>> Your hyperconverged containers.
>> Hyperconverged containers, yeah, exactly.
>> So you're selling a box.
>> We sell a box with software.
>> That's the...
>> With software. But increasingly, our customers are asking us to unbundle it. So, not dissimilar from the sort of journey that Nutanix went through: if a customer wants to buy on Dell, we'll support Dell; if a customer wants to buy on Lenovo, we'll support Lenovo, and we'll just sell it.
>> Or have you unbundled yet? Are you unbundling?
>> We are actively taking orders for unbundling at the present time. In this quarter we have validated Dell and Lenovo as alternate platforms to the Intel.
>> And subscription revenue on that?
>> We do not, yet. But that's the goal.
>> That's what Nutanix struggled with. So, yeah, and then they had to take their medicine.
>> They did. But, you know, they had to do that as a public company. We're still a private company, so we can do that outside the limelight of the public markets.
>> So, um, I'm expecting that you guys are gonna get, pretty much, um, I won't say picked off, but certainly I think your doors are gonna be knocked on by the big guys. Certainly Dell EMC, for instance. And you said yes, you're doing business with Dell EMC.
>> Um, we are doing business as a channel partner and as an OEM partner with them at the present time. I wouldn't call them a customer.
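The CSI and CNI interfaces mentioned in that answer are standard Kubernetes plug-in points, so the storage side of any conformant stack can be inspected with generic tooling. Here is a minimal sketch, assuming the official Kubernetes Python client and an existing kubeconfig (nothing vendor-specific is implied), that lists the StorageClasses a cluster exposes and the provisioners, typically CSI drivers, behind them:

```python
# pip install kubernetes
from kubernetes import client, config


def list_storage_classes() -> None:
    # Load credentials the same way kubectl does; inside a pod,
    # config.load_incluster_config() would be used instead.
    config.load_kube_config()
    storage = client.StorageV1Api()
    for sc in storage.list_storage_class().items:
        annotations = sc.metadata.annotations or {}
        is_default = annotations.get("storageclass.kubernetes.io/is-default-class", "false")
        # CSI-backed classes report their driver name in the provisioner field.
        print(f"{sc.metadata.name:30s} provisioner={sc.provisioner} default={is_default}")


if __name__ == "__main__":
    list_storage_classes()
```

Whichever vendor supplies the driver underneath, a conformant cluster surfaces it through this same API, which is what makes a full-stack claim straightforward to check.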
>> How do you look at VMware, and the VMware business impact? Gelsinger's on the record, it'll be on theCUBE, he said, you know, Kubernetes is the dial tone of the Internet. They're investing, they're doubling down on it. They bought Heptio for half a billion dollars. They're big in cloud native. We expect to see at VMworld tons of cloud-native conversation. Yes? Good or bad for you? What's the take?
>> In a way it legitimizes what we're doing, right? And so, obviously, VMware is a large and successful company. That kind of, you know, legacy and presence in the data center isn't gonna go anywhere overnight. There's a huge set of tooling and infrastructure that VMware has developed and offers to their customers. But that said, I think they've recognized, and their acquisition of Heptio is indicative of the fact, that they know the world's moving this way. I think that at the end of the day, it's gonna be up to the customer, right? The customer is going to say, do I want to run containers inside of VMs? Do I want to run on bare metal? Um, but importantly, I think because of, you know, the impact of the cloud providers in particular, if you think of the lingua franca of cloud native, it's gonna be around the Docker image format, it's gonna be around Kubernetes. It's not necessarily gonna be around VMDK and VMX and ESX, right? So these are all very good technologies, but I think increasingly, you know, the open standard and the open source community...
>> People run Kubernetes on switches directly now. No need, right, to have anything else there. So I gotta ask you on the customer equation. You mentioned you're taking orders. How are you guys doing business today? Where are you guys winning? Give an example of why you're winning. And then for anyone watching, how would they know if they should be a customer of yours? Is there any smoke, signs and signals, inside the enterprise? You mentioned batch to one hour; that's just music to a lot of financial services, for instance. You know, they have timetables, and whether they're pulling backups back or doing all those kinds of things, timing's critical. What's the profile customer? Why would someone call you? What's the situation?
>> The profile is heavy-duty production requirements to run, in both a developer context and an operating context, container- and Kubernetes-based workloads on premises that are compatible with the cloud, right? So, increasingly, our control plane makes it easy to manage workloads not just on premises but also back and forth to the public cloud. So I would argue that essentially all Fortune 500 companies, Global 1000 companies, are wrestling with what's the right way to implement industry-standard x86-based hardware on site that supports containers and Kubernetes and is cloud-compatible, right? So that is the number one question.
>> Then, so I can buy a box and/or software, put it in my data center, yes, and then have that operate with Amazon?
>> Absolutely.
>> Or Google?
>> Which is the beauty of the Kubernetes standard, right? As long as you are Kubernetes-certified, which we are, you can develop and run any workload on our gear, on the cloud, on anyone else that's Kubernetes-certified, etcetera. So, you know, there isn't...
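That portability claim rests on Kubernetes conformance: a certified cluster accepts the same API objects wherever it runs. A minimal sketch, assuming the official Kubernetes Python client and two hypothetical kubeconfig context names (one on premises, one in a public cloud), pushes an identical Deployment to both:

```python
# pip install kubernetes
from kubernetes import client, config

# Hypothetical context names; substitute whatever `kubectl config get-contexts` shows.
CONTEXTS = ["onprem-cluster", "cloud-cluster"]


def demo_deployment(name: str = "portability-demo") -> client.V1Deployment:
    labels = {"app": name}
    container = client.V1Container(name=name, image="nginx:1.25")
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=name, labels=labels),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )


if __name__ == "__main__":
    body = demo_deployment()
    for ctx in CONTEXTS:
        # Same object, different cluster: only the kubeconfig context changes.
        api = client.AppsV1Api(api_client=config.new_client_from_config(context=ctx))
        api.create_namespaced_deployment(namespace="default", body=body)
        print(f"deployed {body.metadata.name} to context {ctx}")
```

Switching the kubeconfig context is the only change between targets; the workload definition itself never has to know which vendor or cloud sits underneath.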
>> Give an example of a workload that would be indicative.
>> So, well, I'll cite one customer, right? Um, the reason that I feel confident actually saying the name is that they sort of went public with us at the recent Gartner conference a week or so ago. The customer is Duke Energy. So, a very typical trajectory of a journey for a customer like this, which is: a couple of years ago, they decided that they wanted to refactor some legacy applications to make them more resilient to things like hurricanes and weather events and the spikes in demand that are associated with that. And so they said, what's the right thing to do? And immediately they picked containers and Kubernetes. And then they went out and they looked at five different vendors, and we were the only vendor that got their POC up and running in the required time frame and hit all five use-case scenarios that they wanted, right? So they ended up refactoring core applications for how they manage power outages using containers and Kubernetes.
>> A real production workload? Real production, or developing, standing it up?
>> Absolutely: in a sandbox, pushing into production, working, absolutely.
>> So it sounds like you guys are positioned to handle any workload.
>> We can handle any workload, but I would say that where we shine is things that are transaction-intensive, because we have the hardware assist and the I/O offload for the storage and the networking. You know, the most demanding applications, things like databases, things like analytics, things like messaging, Kafka and so forth, are where we're really gonna shine.
>> Large flow data.
>> Absolutely, transactional data. We have customers that are doing simpler things like CI/CD, which at the end of the day involves compiling things, right, and managing code bases. So we certainly have customers in less performance-intensive applications, but where nobody can really touch us, and what I mean is literally sort of 10 to 30 times faster than something that Nutanix could do, for example.
>> So you're saying you're 30 times faster than Nutanix?
>> Absolutely, in transaction-intensive applications.
>> When you sell a subscription, not to dig into this too much, but does the customer get the hardware assist on that as well?
>> To date, we've always bundled everything together, so the customers have automatically gotten the hardware.
>> The software is on the hardware, on the box.
>> Yes.
>> If I buy the software, I've got to load it on a machine.
>> That's right.
>> But does that machine give me the hardware assist?
>> You will not, unless you have our two PCIe cards, right? And so this is where, you know, we're just in the very early stages of negotiating with companies like Dell to make it easy for them to integrate our two PCIe cards into their server platforms.
>> So the preferred flagship is the device. If they want the hardware assist, they still need the software, even at that level of intensity, right? And if they don't need to be 30 times faster than Nutanix, they can just get the software.
>> Right, right. And that will involve our CSI plug-in, our CNI plug-in, our OS distribution, our Kubernetes distribution, and the control plane that manages Kubernetes clusters.
>> It's been great to get the feature on the new company. Um, give a quick plug for the company. What are your objectives? What are you trying to do? I assume probably hiring, getting some financing. Any news, anything you can share?
>> There will be, and we will be announcing some news about financing. I'm not prepared to announce that today, but we're in very good shape with respect to being funded for our growth. Um, and consequently, we're now in growth mode. So today we're 55 people. I want to double that over the course of the next four quarters and increasingly just sort of build out our sales force, right? We didn't have a big enough sales force in North America. We've gotta establish a beachhead in India.
We do have one large commercial banking customer in Europe right now. Um, we also have a large automotive manufacturer in APAC. But, um, you know, the total sales and marketing reach has been too low, and so a huge focus of what I'm doing now is building out our go-to-market model and, um, sort of 10x-ing the reach.
>> Standing up a lot of field, go-to-market. How about on the biz dev side? I'd imagine, you mentioned Dell, I'd imagine that there's a large appetite for the hardware offload.
>> Absolutely. So biz dev boils down to striking partnerships with the cloud providers, really on two fronts: both with respect to the hardware offload and assist, but also supporting their on-premises strategies. So Google, for example, has announced Anthos. This is their approach to supporting, you know, on-premises Kubernetes workloads and how they interact with Google Cloud, right? As you can imagine, Microsoft and Amazon also have on-premises aspirations and strategies, and we want to support those as well. This goes well beyond something like Amazon Outposts, which is really a narrow use case and point solution for certain markets. So cloud provider partnerships are very important. x86 server vendor partnerships are very important. And then major ISVs. So we've announced some things with Red Hat. We were at the Red Hat Summit in Boston a few months ago and announced our OpenShift project and product. Um, that is now GA. We're also working with ISVs like MariaDB, MongoDB, Splunk and others.
>> You've got a solid tech and product team. You guys are solid. You feel good about the product?
>> I feel very good about the product.
>> What about the skeptics out there? Just to put the hard question to you: man, it's a crowded field. How are you gonna compete? What are your chances? How do you like your chances, knowing that's a very crowded field? You're going to rely on your fastball, as they say, on the speed. What's your thinking?
>> Well, it's unique. And so part of the proof point that I would cite there is the channel, right? So when you go to the channel, and the channel is afraid that you're gonna piss off Dell or EMC or NetApp or Nutanix or somebody, you know, then they're not gonna promote you. But our channel partners are promoting us, and I'm talking about companies like Lifeboat at the distribution level, talking about companies like CDW, SHI, um, you know, WWT. These major North American distributors and resellers have basically said, look, we have to put you in our line card because you're unique. There is no other purpose-built...
>> And why is that? Like, they get more services around that? They wrap services around it?
>> They want to wrap services around it, absolutely, and they want to do migrations from legacy environments towards microservices, etcetera.
>> Great to have you on to share the company update. Now, if you don't mind getting personal, a personal perspective: you've been on the hardware side, you've seen the large-scale data centers from your Rackable experience, and you've been on the software side, open source. What's your take on the industry right now? Because you're seeing, um, I talk to a lot of CISOs around the security space and, you know, they all say, oh, multi-cloud's a bunch of BS, because I'm not going to split my development team between four clouds. I need to have my people building software stacks for my APIs, and then I go to the vendors.
They support my APIs or you can't be a supplier. Now that's on the CISO side. But the big megatrend is there are software stacks being built inside the premises of the enterprise. Yes. Not that they didn't have developers before, building, you know, COBOL apps in the old days, mainframes, then client-server apps. But now you're seeing a renaissance of developers building a stack for the domain-specific applications that they need. I think that requires that they have to run an on-premises, hyperscale-like environment. What's your take on it?
>> My take is, that's absolutely right. There is more software-based innovation going on, so customers are deciding to write their own software in areas where they can differentiate, right? They're not gonna do it in areas where they can get commodity solutions, from a SaaS standpoint or from other kinds of on-prem standpoints. But increasingly they are doing software development, and 99% of the time now they're choosing Docker and containers and Kubernetes as the way in which they're going to do that, because it will run either on-prem or in the cloud. I do think that multi-cloud management, or multi-cloud, is not a reality. The primary modality that we see our customers choose is tons of on-premises resources, and that's gonna continue for the foreseeable future, plus one preferred cloud provider, because it's simply too difficult to do more than one. But at the same time, they want an environment that will not allow them to be locked into that cloud vendor, right? So they want to potentially experiment with a second public cloud provider, or just make sure that they adhere to standards like Kubernetes that are universally shared, so that they can't be held hostage. But in practice, people don't.
>> Or if they do have a multi-cloud side, it might be applications. Like, if you're running Office 365, right, that's Microsoft.
>> It could be, yes, exactly.
>> One particular domain-specific cloud, but not core cloud: have a backup, use Kubernetes as the bridge, right? Do you see that? I mean, we would agree, by the way, we agree with you on that. But the question we always ask is, we think Kubernetes is gonna be that interoperability layer, the way TCP/IP was with IP networks, where you had this interoperability model. We think that there will be a future state at some point where I could connect to Google and use that, and Microsoft, and use Amazon, all together.
>> That's right.
>> But not today, right?
>> Right, and so nobody's really doing that today. But I believe, and we believe, that there is, ah, a future world where a vendor, neutral with respect to the public cloud providers, can offer a hybrid cloud control plane that manages and brokers workloads, for both production as well as data protection and disaster recovery, across any arbitrary cloud vendor that you want to use. Um, and so it's got to be an independent third party. So, you know, you're never going to trust Amazon to broker a workload to Google. You're never going to trust Google to broker a workload to Microsoft. So it's not gonna be one of the big three. And if you look at who it could be: it could be VMware with Pivotal, now it's getting interesting. Cisco's got an interesting opportunity.
Red Hat's got an interesting opportunity. But there are actually, you know, fewer companies than can be counted on one hand that have the technical capability to develop a hybrid cloud abstraction that spans both on premises and all three public clouds.
>> And it's super early. If you had to peg the inning on this one: first inning?
>> Obviously first inning, really early. Yeah, we like our odds, though, because the disruption, the fundamental disruption here, is containers and Kubernetes, and the interest that they're generating, and the desire on the part of customers to go to microservices. So a ton of application refactoring and a ton of cloud-native application development is going on. And so, you know, with that kind of disruption, you could say...
>> You're targeting application refactoring that needs to run on a cloud operating model, on premises and in public cloud.
>> That's correct. In a sense, Diamanti really brings the cloud to the on-premises environment, right? So, for example, we're the only company that has the concept of on-premises availability zones. We have synchronous replication, where you can have multiple clusters that are synchronously replicated. So if one fails over to the other one, you have no service disruption or loss of data, even for a stateful application, right? So it's cloud-like services that we're bringing on-prem, and then providing the links, you know, for both DR and data protection and production workloads to the public cloud.
>> We'll have to unpack that with you guys. You might want to keep track of Kubernetes and stateful data; it's a whole nother topic, as stateless data is easy to manage with APIs and services, but when you get state, that's when it gets interesting. Tom Barton, the CEO, the new chief executive officer of Diamanti. How long had the company been around before you took over?
>> About five years. Four years before me; I've been on board about a year.
>> I'm looking forward to tracking your progress. We'll see you next week at VMworld. Tom Barton, CEO of Diamanti, here inside theCUBE. Hot startup. I'm John Furrier.
>> Thanks for watching.
SUMMARY :
from our studios in the heart of Silicon Valley, Palo Alto, power that Tom Barton is the CEO of De Monte, which is in that business. And the the cool thing about the Amanti is essentially Next generation of companies drive for the next 20 to 30 years, and this is the biggest conversation. We hope to change that. What was the key thing once you dug I'm a huge believer that if you look at the history of the last 15 years, So if you look at V m World, But at least I can re factor the data based here and serve up you know Floor That piece of the shirt and everything else could run, as is And really, a lot of the genius of our architecture was to make it easy now, but everything's virtualized we agree with you that containers and compares what is gonna So at the time that we supported this media customer on Splunk, in the match is a great example sticking to the product technology differentiate. So everything that you need Yeah, exactly. So you're selling a box. from the sort of journey that Nutanix went through. it. Or have you unbundled? On that, we But that's the golden mask So, yeah, and then they had to take their medicine. But, you know, they had to do that as a public company. And you said yes. um, we are doing as a channel partner and as an OM partner with them at the present time there, How do you look at V M were actually there in the V M, where business impact Gelsinger's on the record. Um, but importantly, I think because of, you know, the impact of the cloud providers in particular. So I gotta ask you on the customer equation. So that that is the number one question Yes, and then have that operate with Amazon? So you know that there isn't saying the name is that they actually sort of went public with us at the recent Gardner conference a So you sounds like you guys were positioned to handle any workload. the most demanding applications, things like databases, things like analytics, We have customers that are doing simpler things like C I. C D. Which at the end of the day involves compiling But does the customer get the hardware assist So the customers have automatically got in the heart But that machine Give me the hardware. And so this is how you know we're just in the very early So the preferred flagship is the is the device. are kubernetes distribution, and the control plane that manages kubernetes clusters give a quick plug for the company. But, um, you know, the total sales and marketing reach has been too low. I might imagine that you mentioned delicate. This is their approach to supporting, you know, on premises, kubernetes workloads And on the speed, what's the what's What's your thinking? And so part of the way or approve point that I would cite There is the channel, right? They want to kill the murder where they want to. Great to have you on share the company update. But at the same time they want an environment that will not allow themselves to be locked into that cloud Or if they do have a militant side, it might be applications. On one But the question we always ask is, we think you Bernays is gonna be that interoperability layer the of companies could be counted on one hand that have the technical capability to develop hybrid Had to peg the inning on this one first inning, obviously first inning really And so, you know, with that kind of disruption, So if one fails the other one, you have no service disruption or loss of data, block locked Unpack with you guys. Four years before me about been on board about a year. Sea of de Amante Here inside the Cube Hot startup.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Diane Greene | PERSON | 0.99+ |
Eric Herzog | PERSON | 0.99+ |
James Kobielus | PERSON | 0.99+ |
Jeff Hammerbacher | PERSON | 0.99+ |
Diane | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Mark Albertson | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Rebecca Knight | PERSON | 0.99+ |
Jennifer | PERSON | 0.99+ |
Colin | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Rob Hof | PERSON | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
Tricia Wang | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Singapore | LOCATION | 0.99+ |
James Scott | PERSON | 0.99+ |
Scott | PERSON | 0.99+ |
Ray Wang | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Brian Walden | PERSON | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Verizon | ORGANIZATION | 0.99+ |
Jeff Bezos | PERSON | 0.99+ |
Rachel Tobik | PERSON | 0.99+ |
Alphabet | ORGANIZATION | 0.99+ |
Zeynep Tufekci | PERSON | 0.99+ |
Tricia | PERSON | 0.99+ |
Stu | PERSON | 0.99+ |
Tom Barton | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Sandra Rivera | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Qualcomm | ORGANIZATION | 0.99+ |
Ginni Rometty | PERSON | 0.99+ |
France | LOCATION | 0.99+ |
Jennifer Lin | PERSON | 0.99+ |
Steve Jobs | PERSON | 0.99+ |
Seattle | LOCATION | 0.99+ |
Brian | PERSON | 0.99+ |
Nokia | ORGANIZATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Scott Raynovich | PERSON | 0.99+ |
Radisys | ORGANIZATION | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Eric | PERSON | 0.99+ |
Amanda Silver | PERSON | 0.99+ |
Karen Quintos, Dell Technologies | Dell Technologies World 2019
>> Live from Las Vegas, it's theCUBE covering Dell Technologies World 2019. Brought to you by Dell Technologies and its ecosystem partners.
>> Hi, welcome to theCUBE. I'm Lisa Martin with Stu Miniman and we are live at Dell Technologies World 2019 in Las Vegas with about 15,000 or so other people. There's about 4,000 of the Dell Technologies community of partners here as well. Day one, as I mentioned, and we're very pleased to welcome back one of our Cube alumni, Karen Quintos, EVP and Chief Customer Officer from Dell Technologies. Karen, welcome back to theCUBE.
>> Thank you, thank you. Always great to be with you all.
>> So one of the things: you walked out on stage this morning with Michael Dell and the whole gang and you started to share a story that I'd love for you to share with our audience, about this darling little girl, Phoebe from Manchester, England, that has to do with this Dell Technologies partnership with Deloitte Digital and 3D prosthetics. Can you share this story and what it meant about this partnership?
>> Well, we wanted to tell this story about Phoebe because we really wanted the audience to understand that the innovation and all of what's being done with social good is really about the individual. You know, technology plays a key role, but the faces behind the technology and the innovation are people. And, you know, as you mention, Phoebe is from Manchester, U.K. Her father wrote this blog about Phoebe's experience. Phoebe's aunt, Claire, works for Deloitte. She had access to a lot of what they could do in terms of 3D printing and basically came to Dell, and we were able to take it and scale it and accelerate it and speed it up, with an engineer by the name of Seamus who saw what the Precision workstation could do. So it was this small idea to help an amazing little girl like this that has now turned into this movement around how we more rapidly, quickly scale 3D prosthetics so these children and adults can have a chance at a normal life.
>> What kind of prosthetics did you guys build for her?
>> It's an arm. So the very first arm that we built for her, when she was about five years old, had the Disney Frozen theme painted on it. I asked her father Keith what is the one that she's wearing now, because she's now this really super cool seven-year-old that goes to school, and all of her classmates and friends around her see her as this rock star, and the one that she has today is printed with unicorns and rainbows. So if you know anything about seven-year-old girls, it's all about unicorns and rainbows. And she's done an amazing thing, and she's inspired so many other people around the world, individuals, customers, partners like Deloitte and others that we're working with, to really take this to a whole new level.
>> Karen, I think back to Dell, you know, a couple of decades ago: Dell drove a lot of the waves of technology change, you know, think back to the PC. But in the early days it was, you know, supply chain and simple ordering in all these environments, and when I've watched Dell move into the enterprise, a lot of that is, I need to be listening to my customer, I need to be much closer to them, because it's not just ordering your SKU and having it faster and at a reasonable price; there's a lot more customization. Can you talk about how you're kind of putting that customer in the center of the discussion and the feedback loops that you have with them, how that's changed at Dell?
Yeah, sure. So all of the basic fundamentals, around you've got to order, deliver, make the supply chain work to deliver for our customers, still matter, but it's gone beyond that, to your point, and probably the best way to talk about it is these six customer award winners that we recognized last night. I've gotten to know all six of those over the last year, and while they are doing amazing things from a digital transformation perspective, using technology in the travel business, the automotive business, banking, financial services, insurance, kind of across the board, the thing that they say consistently is, look, we didn't always have the answer in terms of what we needed, but you came in, you listened, you rolled up your sleeves to try to figure out how you could design a solution that would meet the needs that we have, and they said, that's why you're one of the most strategic partners that we have. Now, you can do all those other things, right? You can supply chain, right, and build and produce and all that, but it's the design of a solution that helps us do the things that will allow us to be differentiated. And you look at that list of six customers and the brands that they represent, right, Carnival Cruise Lines, USAA, Bradesco, McLaren, I mean, the list kind of goes on, they are the differentiators out there, and we're really honored to be able to be working with them.
>> So we're only at day one and it's only just after lunchtime, but one of the things I think, thematically, that I heard this morning in the keynote with Michael and Pat and Jeff and Satya and yourself is, it's all about people. A couple interviews I did earlier today, same sort of thing; it's like we had the city of Las Vegas on. This is all driven by the people and for the people, so that sense of community is really strong. I also noticed this year's theme of real transformation plays off last year's theme of make it real, it being digital transformation, IT, security, workforce transformation. What are some of the things, like Dell Technologies Cloud this morning for example, or VMware Cloud on Dell EMC, that you guys specifically heard from last year's attendees that are manifesting in some of the announcements today and some of the great things the 15 or so thousand people here are going to get to see and feel and touch at this year's event?
>> Well, Lisa, you nailed it. What you heard on stage today is what customers have been telling us over the last year. We unveiled, about a month ago with a very small group of CIOs in EMEA, our cloud strategy, our portfolio, the things that we're going to be able to do, and one customer in particular immediately chimed in and said, we need you in the cloud and we need you in there now, because you offer choice, you offer open, you offer simplicity, you offer integration, and they're like, there's just too many choices and a lot of them are expensive. So what you heard on stage is absolutely a manifestation of what they told us. The other piece is, look, I think the industry and CIOs are very quickly realizing their workforce matters. Making them happy and productive matters; having them enabled so that they can work flexibly wherever they want really, really matters. And, you know, our Unified Workspace ONE solution is all about how we help them simplify, automate, streamline that experience with their workforce so their employees stick around.
I mean, there's a war on talent and everybody's dealing with it, and that experience is really, really important, in particular to the Gen Zs and the millennials.
>> Karen, I love that point. Actually, I was really impressed this morning. In the press and analyst session this morning, there was a discussion of diversity and inclusion, and the thing that I heard is, it's a business imperative. It's not, okay, it's nice to do it or we should do it, but no, this is actually critical to the business. Can you talk about what that means and what you hear from your customers and partners?
>> Yes, yes. Well, we're seeing it in spades in all of these technology jobs that are open, right? So look, all the research has shown that if you build a diverse team, you'll get to a more innovative solution, and people generally get that. But what they really get today is, here in the U.S. alone, there are 1.1 million open technology jobs by the year 2024, and half of them, half of them, are going to be filled by the existing workforce. So there is this war on talent that is going to get bigger and bigger and bigger, and I think that's what really has given a wake-up call to corporations around why this matters. I think the other piece that we're starting to see, not just around diversity but in our other social impact priorities around the environment as well as how we use our technology for good: look, customers want to do business with a corporation that has a soul, and they stand for something and they're doing something, not just a bunch of talking heads, but where it's really turning into action and they're being transparent about the journeys and where they're at with it. So it matters now to the current generation, the next generation; it matters to business leaders; it matters to the financial services community, where you start to see, you know, some of the momentum around, you know, the Blackstones and State Street. So it's really exciting that we're part of it and we're leading the way in a number of areas.
>> And it's something that we talk about a lot on theCUBE, diversity and inclusion, from many different levels, one of them being the business imperative that you talked about, the workforce needing to compete for this talent, but also how much different products and technologies and apps and APIs and things can be with just thought diversity in and of itself. And I think it's refreshing, to what Stu was saying: hey, we're hearing this is a business imperative, but you're also seeing proof in the pudding. This isn't just, we've got an imperative and we're going to do things nominally; you're seeing the efforts manifest. One of them, Draper Labs, was one of the customer award winners. That video that was shown this morning struck probably everyone's heart, with the Camp Fire in Paradise, California.
>> Tragic.
>> I grew up close to there, and that was something that, only maybe, I get goosebumps, six months ago, was so massively devastating. And we think, you know, that was 2018, but seeing how Dell Technologies is enabling this laboratory to investigate the potential toxins coming from all of this charred debris, and how they're working to understand the social impact to all of us as they rebuild, I just thought it was a really nice manifestation of a social impact but also the technology breadth and differentiation that Dell is enabling.
>> That was also why this story today was so great about Phoebe, right? Because it's where you can connect the human spirit with technology and scale and have an even bigger impact, and there's so much that technology can help with today. You know, that story about Phoebe: from the time that her aunt from Deloitte identified, you know, what we could do, all the way to the time that Phoebe got her first arm, was less than seven months. Seven months. And you think about, you know, some of the other prototypes that were out there, it would take years to be able to do it. So I love that, you know, connection of human need with the human spirit, and connecting and inspiring and motivating so many children and adults around the world.
>> And what are some of the next, speaking of Phoebe and the Deloitte Digital 3D prosthetics partnership, what are some of the other areas where we're going to see this technology that this little five-year-old from Manchester spurred?
>> Well, I'll give you another example. So there was an individual in India, actually an employee of ours, that designed an application to help figure out how to deploy healthcare monitoring in some of the remote villages in India where they don't have access to basic things that we take for granted: monitoring your blood pressure, right, checking your cholesterol level. And he created this application that, a year later now, we have given kind of the full range of the Dell portfolio technology suite. So it is, you know, our application plus Pivotal plus VMware plus Dell EMC, combined with the partnering that we've done with Tata Trusts and the State of India. We've now deployed this healthcare solution, called the Life Care Solution, to nearly 37 million rural residents, citizens in India.
>> Wow, 37 million.
>> 37 million. So a small idea you take from a really passionate individual, a person, a human being, and figure out how you can really leverage that across the full gamut of what Dell can do. I think the results are incredible.
>> Awesome. You guys also have a Women in Technology Executive Summit that you're hosting later this week. Let's talk about that in conjunction with what we talked about a minute ago: it's a business imperative, as Stu pointed out, there are tangible, measurable results. Tell us about this.
>> Well, I'm kind of done, honestly, with a lot of the negativity around, oh, we're not making any progress, oh, we need to be moving fast. If you look at the amount of effort, energy and focus that is going into this space by so many companies and the public sector, it's remarkable, and I've met a number of these CIOs over the last year or two. So we basically said, let's invite 20 of them, let's share our passion; they have made progress, they care about solving this across their organizations. A lot of us are working on the same things, so if we simply got in a room and figured out, is there power in numbers, and if we worked collectively together, could we accelerate progress? So that's what it's all about. So we have about 15 or 20 CEOs, both men and women, and we'll be spending, you know, six or seven hours together, and we want to walk away with one or two recommendations on some things that we could collaborate on and have a faster, bigger impact.
And I heard that. You mentioned collaboration; that's one of the vibes I also got from the keynote this morning when you saw Michael up there with Pat and Jeff and Satya: the collaboration within Dell Technologies. I think, even talking with Stu and some of the things that have come out and that I've read, there seems to be more symbiosis with VMware. But even so, like I said, we're not even halfway through day one, and that is the spirit around here. We talk about people and influence, but this spirit of collaboration is very authentic here. You are the first chief customer officer for Dell. If you look back at your tenure in this role, could you have envisioned where you are now?
>> No, because it was, like, the first-ever chief customer officer at Dell, and, you know, it really gave me a unique opportunity to build something from scratch. And, you know, there have been a number of other competitors as well as other companies that have announced, in the last year or so, the need to have a chief customer officer, the need to figure out, which is a big remit of mine across Dell Technologies, how do we eliminate the silos and connect the seams, because that's where the value is going to be unlocked for our customers. That's what you saw on stage today. You saw the value of that with Jeff, with Pat, with Satya, you know, some of our most important partners out there. Our customers don't want point solutions. They want them to be integrated, they want it to be streamlined, they want it to be automated, they want us to speed time to value, they want us to streamline a lot of the back-office, kind of mundane things that they're like, I don't want my people spending their time anymore on doing that. And that's where we see Dell Technologies being so much more differentiated from other choices in the market.
>> Yep, I agree with you. Well, Karen, thank you so much for joining Stu and me on theCUBE this afternoon, sharing some of the stories. We look forward to hearing next year what comes out of this year's Women in Tech Executive Summit. Thank you so much for your time.
>> Thank you very much, thank you.
>> For Stu Miniman, I'm Lisa Martin. You're watching theCUBE, live, day one of Dell Technologies World from Las Vegas. Thanks for watching. (light electronic music)
SUMMARY :
Brought to you by Dell Technologies There's about 4,000 of the Always great to be with you all. So one of the things you and you know, as you mention Phoebe is and the one that she has today is printed a lot of that is, I need to and probably the best way to talk about it and some of the great things the 15 and said, we need you in the cloud and what you hear from your and people generally get that that you talked about, the and we think you know, that was 2018 and adults around the world. and the Deloitte digital Trust and the State of India, that across the full gamut Awesome, you guys also have a and the public sector, it's remarkable and that is the spirit around here. and connect the seams sharing some of the stories, of Dell Technology World from Las Vegas,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Claire | PERSON | 0.99+ |
Karen | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Karen Quintos | PERSON | 0.99+ |
Deloitte | ORGANIZATION | 0.99+ |
India | LOCATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Bradesco | ORGANIZATION | 0.99+ |
Michael Dell | PERSON | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
USAA | ORGANIZATION | 0.99+ |
Tata Trust | ORGANIZATION | 0.99+ |
Keith | PERSON | 0.99+ |
Pat | PERSON | 0.99+ |
Phoebe | PERSON | 0.99+ |
2018 | DATE | 0.99+ |
six | QUANTITY | 0.99+ |
McLaren | ORGANIZATION | 0.99+ |
Carnival Cruise Lines | ORGANIZATION | 0.99+ |
Lisa | PERSON | 0.99+ |
seven months | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Michael | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
2024 | DATE | 0.99+ |
first arm | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
Stu | PERSON | 0.99+ |
six customers | QUANTITY | 0.99+ |
Satya | PERSON | 0.99+ |
next year | DATE | 0.99+ |
a year later | DATE | 0.99+ |
Manchester, England | LOCATION | 0.99+ |
37 million | QUANTITY | 0.99+ |
Draper Labs | ORGANIZATION | 0.99+ |
20 | QUANTITY | 0.99+ |
less than seven months | QUANTITY | 0.99+ |
U.S. | LOCATION | 0.99+ |
today | DATE | 0.99+ |
Manchester | LOCATION | 0.99+ |
six months ago | DATE | 0.98+ |
last night | DATE | 0.98+ |
Women in Tech Exec Summit | EVENT | 0.98+ |
this year | DATE | 0.98+ |
both | QUANTITY | 0.98+ |
Manchester, U.K. | LOCATION | 0.98+ |
Keynote Analysis | Fortinet Accelerate 2019
>> Announcer: Live from Orlando, Florida, it's theCUBE, covering Accelerate19. Brought to you by Fortinet.
>> Welcome to theCUBE's coverage of Fortinet Accelerate 2019, live from Orlando, Florida. I'm Lisa Martin with Peter Burris. Peter, it's great to be with you, our third year co-hosting Accelerate together.
>> Indeed, Lisa.
>> So they've moved from Vegas to Orlando, hence we did too, so we had a little bit of a longer flight to get here. We just came from the keynote session. We were talkin' about the loud music kind of getting the energy going; I appreciated that as part of my caffeination (laughs) energy this morning. But a lot of numbers were shared from Fortinet Accelerate: 4,000 or so attendees here today from 40 different countries. They gave a lot of information about how strong their revenue has been, $1.8 billion, up 20% year on year. Lots of customers added. What were some of the takeaways for you from this morning's keynote session?
>> I've got three things, I think, Lisa. Number one is that you've heard the expression, skating to where the puck's going to go. Fortinet is one of those companies that has succeeded in skating to where the puck is going to go. Clearly cloud is not an architecture or strategy for centralizing computing; it's a strategy for, in a controlled, coherent way, greater distribution of computing, including all the way out to the edge. There's going to be a magnificent number of new kinds of architectures created, but the central feature of all of them is going to be high-performance, highly flexible software-defined networking that has to have security built into it, and Fortinet's at the vanguard of that. The second thing I'd say is that we talk a lot about software-defined wide-area networking and software-defined networking and software-defined infrastructure, and that's great, but it ultimately has to run on some type of hardware if it's going to work. And one of the advantages of introducing advanced ASICs is that you can boost the amount of performance that your stuff can run at, and I find it interesting that there's a clear relationship between Fortinet's ability to bring out more powerful hardware and its ability to add additional functionality within its own stack, but also grow the size of its ecosystem. And I think it's going to be very interesting over the next few years to discover where that tension is going to go, between having access to more hardware because you've designed it, and the whole concept of scale. My guess is that Fortinet's growth and Fortinet's footprint are going to be more than big enough to sustain its hardware so that it can continue to drive that kind of advantage. And the last thing that I'd say is that the prevalence and centrality of networking within cloud computing ultimately means that there's going to be a broad class of audiences paying close attention to it. And in the keynotes this morning we heard a lot of great talk that was really hitting the network professional, and the people that serve that network professional, and the security professional. But Fortinet's going to have to expand its conversation to business people, and explain why digital business is inherently a deeply networked structure, and also to application developers.
Fortinet is talking about how the network and security are going to come together, which has a lot of institutional and other implications, but ultimately that combination of resources is going to be very attractive to developers in the long run, who don't necessarily like security, and therefore security's always been a bolt-on. So if Fortinet can start attracting developers into that vision and into that fold, so the network, the combined network-security platform, becomes more developer-friendly, we may see some fascinating new classes of applications emerge as a consequence of Fortinet's hardware, market and innovation leadership.
>> One of the things that they talked about this morning was some of the tenets that were discussed at Davos 2019 just 10 weeks ago. They talked about education, ecosystem and technology, and then showed a slide. Patrice Perche, the executive senior vice president of sales, said, hey, we were talking about this last year. They talked about education and what they're doing not only to address the major skills gap in cybersecurity, but even what they're doing to help veterans. From an ecosystem perspective, this open ecosystem: they talked about this massive expansion of fabric-ready partners and technology connector partners. And then of course the technology, where Ken Xie, CEO and founder of Fortinet, was a speaker at Davos. So they really talked about, sort of, hey, last year here we were talking about these three pillars of cybersecurity at the heart of the fourth industrial revolution, and look where we are now. So they sort of set themselves up as being, I wouldn't say predictors of what's happening, but certainly at the leading edge. And then, as you were talking about a minute ago, from a competitive perspective, they talked a lot this morning about where they are positioned in the market against their competitors, even down to the number of patents that they have and the number of, say, Gartner Magic Quadrants that they've participated in. So they clearly are positioning themselves as a leader, and the vibe that I got was a lot of confidence in that competitive positioning.
>> Yeah, and I think it's well deserved. So you mentioned the skills gap. Fortinet mentioned that there are three and a half million more open positions for cybersecurity experts than there are people to fill them, and they're talking about how they're training NSEs at a rate such that they're going to, you know, have trained 300,000 by the end of the year. So they're clearly putting their money where their mouth is on that front. It's interesting that people, all of us, tend to talk about AI as a foregone conclusion, without recognizing the deep interrelationship between people and technology and how people ultimately will gate the adoption of technology. And that's really what innovation's about: how fast you embed it in a business, in a community, so that they change their behaviors. And so the need for greater cybersecurity, numbers of cybersecurity people, is going to be a major barrier; it's going to be a major constraint on how fast a lot of new technologies get introduced.
And you know, Fortinet clearly has recognized that, as have other network players, who are seeing that their total addressable market is going to be shaped strongly in the future by how fast security becomes embedded within the core infrastructure, so that more applications, more complex processes, more institutions and businesses can be built on that network. You know, there is one thing I think we need to listen for today, because while Fortinet has been at the vanguard of a lot of these trends, you know, having that hardware that opens up additional footprint that they can put more software and software function into, there still is a lot of new technology coming in the cloud. When you start talking about containers and Kubernetes, those are not just going to be technologies that operate at the cluster level. They're also going to be embedded down into system software as well, to bring that kind of cloud operating model, so that you can just install the software that you need. And it's going to be interesting to see how Fortinet, over the next few years, I don't want to say skinnies up, but targets some of its core software functionality so that it becomes more cloud-like in how it's managed, how it's implemented, how it's updated, how fast patches and fixes are handled. That's going to be a major source of pressure and a major source of tension in the entire software-defined marketplace, but especially in the software-defined networking marketplace.
>> One of the things Ken Xie talked about was cloud versus edge, and he actually said, kind of, the edge will eat the cloud. We live, every business lives, in this hybrid multi-cloud world, with millions of IoT devices and mobile and operational technology that's taking advantage of being connected over IP. From your perspective, kind of dig into what Ken Xie was talking about with the edge eating the cloud, and companies having to push security out, not just, I shouldn't say push it out to the edge, but, as you were saying earlier and as they say, it needs to be embedded everywhere. What are your thoughts on that?
>> Well, I think I would say I had some disagreements with him on some of that, but I also think he extended the conversation greatly. And the disagreements are mainly kind of nit-picky things. So let me explain what I mean by that. There's some analyst somewhere, some venture capitalist somewhere, that coined the term that the edge is going to eat the cloud, and, you know, that's one of those false dichotomies. I mean, it's a ridiculous statement. There's no reason to say that kind of stuff. The edge is going to reshape the cloud. The cloud is going to move to the edge. The notion of fog computing is ridiculous, because you need clarity, incredible clarity, at the edge. And I think that's what Ken was trying to get to: the idea that the edge has to be more clear, that the same concepts of security, the same notions of security, discovery, visibility, have to be absolutely clear at the edge. There can be no fog; it must be clear. And the cloud is going to move there, the cloud operating model is going to move there, and networking is absolutely going to be a central feature of how that happens. Now, one of the things, I'm not sure if it was Ken or if it was the Head of Products who said it, but the notion of the edge becoming defined in part by different zones of trust is, I think, very, very interesting.
We think at Wikibon that there will be this notion of what we call a data zone, where we will have edge computing defined by what data needs to be proximate to whatever action is being supported at the edge, and it is the action that is the central feature of that. But related to that is: what trust is required for that action to be competent? And by that I mean, you know, not only worrying about what resources have access to it, but can we actually say that is a competent action, that is a trustworthy action, that that agency, that sense of agency, is acceptable to the business? So this notion of trust as being one of the defining characteristics that differentiates different classes of edge, I think, is very interesting and very smart, and is going to become one of the key issues that businesses have to think about when they think about their overall edge architectures. But to come back to your core point, we can say that the edge is going to eat the cloud if we want to. I mean, who cares? I'd rather say that if software's going to eat the world, it's going to eat it at the edge, and where we put software we need to put trust, and we need to put networking that can handle that level of trust, with high-performance security in place. And I think that's very consistent with what we heard this morning.
>> So you brought up AI a minute ago, and one of the things, now, the keynote is still going on; I think there's a panel that's happening right now with their CISO. AI is something that we talk about at every event. There are many angles to look at AI: the good, the bad, the ugly, the in-between. I wanted to get your perspective on, and we talked about the skills gap a minute ago, how do you think that companies like Fortinet and their customers in every industry can leverage AI to help mitigate some of the concerns with, as you mentioned, the 3.5 million open positions?
>> Well, there's an enormous number of use cases of AI, obviously. There is AI and machine learning being used to identify patterns of behavior that then can feed a system that has a very, very simple monitor, action, response kind of an interaction, kind of a feedback loop. So that's definitely going to be an important element of how the edge evolves in the future: having the ability to model more complex environmental issues, more complex, you know, intrinsic issues, so that you get the right action from some of these devices, from some of these sensors, from some of these actuators. So that's going to be important, and even there we still need to make sure that we are, appropriately, as we talked about, defining that trust zone and recognizing that we can't have disconnected security capabilities if we have connected resources and devices. The second thing is the whole notion of augmented AI, which is AI being used to limit the number of options that a human being faces as they make a decision. So instead of thinking about AI taking the action and that's it, we think of AI as taking action on limiting the number of options that a person or a group of people face, to try to streamline the rate at which the decision and subsequent action can get taken. And there, too, the ability to understand access controls, who has visibility into it, how we sustain that, how we sustain the data, how we are able to audit things over time, is going to be crucially important.
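That idea of augmented AI, a model narrowing an operator's choices rather than acting on its own, is easy to sketch. The toy example below is a hypothetical illustration in plain Python; the scoring fields stand in for whatever model a real security operations tool would use, and only the top few candidate actions are handed to the human to decide on:

```python
from dataclasses import dataclass


@dataclass
class CandidateAction:
    description: str
    risk_reduction: float   # model-estimated benefit, 0..1
    blast_radius: float     # model-estimated disruption if applied, 0..1


def shortlist(candidates: list[CandidateAction], keep: int = 3) -> list[CandidateAction]:
    # Rank by estimated benefit minus disruption and hand only the top few
    # to the human operator: the model narrows options, the person decides.
    ranked = sorted(candidates, key=lambda c: c.risk_reduction - c.blast_radius, reverse=True)
    return ranked[:keep]


if __name__ == "__main__":
    options = [
        CandidateAction("Quarantine host 10.0.4.17", 0.8, 0.2),
        CandidateAction("Block outbound port 445 at the edge firewall", 0.6, 0.1),
        CandidateAction("Force a password reset for all users", 0.5, 0.7),
        CandidateAction("Disable the affected VPN gateway", 0.7, 0.6),
    ]
    for action in shortlist(options):
        print(action.description)
```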
Now, will that find itself into how networking works? Absolutely, because in many network operations centers, at least, say, five, six years ago, you'd have a room full of people sitting at computer terminals, looking at these enormous screens and watching these events go by, and the effort to correlate when there was a problem often took hours. And now we can start to see AI being increasingly embedded, with machine learning and other types of algorithms, to try to limit the complexity that a person faces, so you can get a better response, a more accurate response, and a more auditable response to potential problems. And Fortinet is clearly taking advantage of that. Now, the whole FortiGuard Labs operation, and their ability, you know, they've put a lot of devices out there. Those devices run very fast, they have a little bit of additional performance, so they can monitor things a little bit more richly, send it back, and then do phenomenal analysis on how their customer base is being engaged by good and bad traffic. And that leads to Fortinet becoming an active participant, not just at an AI level but also at a human-being level, to help their customers, to help shape their customers' responses to challenges that are network-based.
>> And that's the key there, the human interaction, 'cause as we know, humans are the biggest security breach, starting from basic passwords being 1, 2, 3, 4, 5, 6, 7, 8, 9. Well, Peter--
>> Oh, we shouldn't do that?
>> (laughs) You know, put an exclamation point at the end, you'll be fine. Peter and I have a great day coming ahead. We've got guests from Fortinet. We've got their CEO Ken Xie; their CISO Phil Quade is going to be on; Derek Manky with FortiGuard Labs, talking about the 100 billion events that they're analyzing and helping their customers to use that data. We've got customers from Siemens and some of their partners, including one of their newest alliance partners, Symantec. So stick around. Peter and I will be covering Fortinet Accelerate19 all day here from Orlando, Florida. For Peter Burris, I'm Lisa Martin. Thanks for watching theCUBE. (techno music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Patrice Perche | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Ken Xie | PERSON | 0.99+ |
Fortinet | ORGANIZATION | 0.99+ |
Symantec | ORGANIZATION | 0.99+ |
Siemens | ORGANIZATION | 0.99+ |
Vegas | LOCATION | 0.99+ |
$1.8 billion | QUANTITY | 0.99+ |
Derek Manky | PERSON | 0.99+ |
Orlando | LOCATION | 0.99+ |
Lisa | PERSON | 0.99+ |
Ken | PERSON | 0.99+ |
Fortiguard Labs | ORGANIZATION | 0.99+ |
Orlando, Florida | LOCATION | 0.99+ |
300,000 | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
4,000 | QUANTITY | 0.99+ |
Phil Quade | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
40 different countries | QUANTITY | 0.99+ |
third year | QUANTITY | 0.99+ |
Wikibon | ORGANIZATION | 0.99+ |
10 weeks ago | DATE | 0.99+ |
three and a half million | QUANTITY | 0.99+ |
Gartner | ORGANIZATION | 0.98+ |
second thing | QUANTITY | 0.98+ |
Fortinet Accelerate | ORGANIZATION | 0.98+ |
One | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
millions | QUANTITY | 0.98+ |
three things | QUANTITY | 0.97+ |
100 billion events | QUANTITY | 0.97+ |
one thing | QUANTITY | 0.96+ |
a minute ago | DATE | 0.95+ |
six years ago | DATE | 0.94+ |
five | DATE | 0.94+ |
20% | QUANTITY | 0.94+ |
three pillars | QUANTITY | 0.94+ |
this morning | DATE | 0.93+ |
fourth industrial revolution | EVENT | 0.92+ |
Davos 2019 | EVENT | 0.91+ |
3.5 million open | QUANTITY | 0.87+ |
Keynote | EVENT | 0.83+ |
theCUBE | ORGANIZATION | 0.83+ |
Accelerate | ORGANIZATION | 0.78+ |
next few years | DATE | 0.77+ |
Number one | QUANTITY | 0.75+ |
CEO | PERSON | 0.7+ |
Ashesh Badani, Red Hat | KubeCon 2018
>> Live from Seattle, Washington, it's the Cube, covering KubeCon and Cloud Native Con North America 2018. Brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome back everyone. We are live in Seattle for KubeCon 2018, Cloud Native Con. It's the Cube, I'm John Furrier, your host with Stu Miniman. Our next guest is Ashesh Badani, who is the Vice-President and General Manager of Cloud Platforms at Red Hat. Great to see you, welcome back to the Cube. >> Thanks for having me on. Always good to be back. >> So you guys, again, we talk every year with you. It's almost like a check-in. So what's new? You got some big, obviously, the news about the IBM. We don't really want to get into that detail. I know you can only touch on that because it's already out there. But you guys had great success with platform as a service. Now you got the growth of Kubecon and Cloud Native Con, 8000 attendees and users. There's uptake. What's the update on the Red Hat side? >> Yeah, we're excited. Excited to be back at Kubecon. It's bigger and better than it's ever been, I think so. That's fantastic. We've been investing in this community for over four years now, since 2014. Really, from the earliest days. Based the entire platform on it. Continue growing that, adding lots of customers across the world. And I think what's really been gratifying for us to see is just the diversity of participants. Both from a user perspective as well as the wider ecosystem. So whether you're a storage player, a networking player, management, marketing, what have you. Everything sort of building around this ecosystem. I think we're creating a great amount of value and we're seeing diverse applications being built. >> So you guys have been good then on (mumbles), good timing, a lot of things are going on. This show is an open-source community, right. And that's been a great thing. This is kind of where the end users come from. But two other personas come in that we're seeing participate heavily. The IT pro, the IT expert, and then the classic developer. So you have kind of a melting pot of how this is kind of horizontally connecting. You guys have been successful on the IT side. Where is this impacting the end users? How is this open-source movement impacting IT, specifically, and at the end of the day, the developers who are writing code and have to get more stuff out? What's your thoughts? >> So, we hosted OpenShift Commons yesterday. OpenShift Commons, for the folks who don't know, is our gathering of participants within the larger OpenShift community. We had lots of end users come and talk about how the reason they're adopting a Kubernetes-based platform is to get greater productivity. So for example, if you're someone like Progressive Insurance, an established organization, how do you release applications quicker? How do you make your developers more productive? How do you enable them to have more languages, tools, frameworks at their disposal? To be able to compete in this world where you've got start-ups, you've got other companies trying to compete aggressively with you. I think it's a big dent here, right? It's not just for if you work in traditional IT. It's for companies of all sizes. >> When you talk about customers, every customer is different. You've got, you look at IT, everything is additive, it tends to be a bit of a heterogeneous mess when you get there. Help connect for us what you are hearing from customers.
How is, not just Kubernetes, but everything going on here in the Cloud Native environment, helping them? How is it changing the way that they do their business and how's Red Hat involved? >> So one thing we've been noticing is that Hybrid Cloud is here and here to stay. So we've consistently been hearing this from customers. They've invested lots of money and time and energy, skills, in their existing environments. And they want to take advantage of public clouds. But they want to do that with flexibility, with portability, to bring to bear. What we've been trying to do is focus on exactly that. How do we help solve that problem and provide an abstraction? How do you provide primitives? So, for example, we announced our support of Knative, and how we'll make that available as part of OpenShift. Why's that? Well, how can we provide Serverless primitives within the platform so folks can have the flexibility to be able to adopt next-generation technologies. But to be able to do that consistently regardless of where they deploy. >> So, I love that. Talk about meeting the customers there. One of the things that really strikes me, there's so much change going on in the industry. And that's an area that Red Hat has a couple decades of experience. Maybe help explain how Red Hat is bringing some of that enterprise oversight. Just like they've done for Linux for a long time. >> Yeah, yeah. Stu, you're following us very closely, as are you John, and the team at the Cube. We're trying to embrace that change as it comes upon us. So, I think the last time I was here, I was here with Alex Polvi of Core OS. Red Hat acquired Core OS in January. >> Big deal. >> Yeah, big acquisition for us. And now we're starting to see the fruits of some of that labor. In terms of integrating that technology. Why did we do that? We wanted to get more automation into the platform. So, customers have said, hey, look, I want these clusters to be more self-managing, self-healing. And so we've been really focused on saying how can we take those challenges the customers have, bring that directly into a platform so they're performing more and more like the expectations that they have in the public cloud, but in these diverse, heterogeneous environments. >> That speaks to the operating model of cloud. You guys have a holistic view because you're Red Hat. You got a lot of customers. You have the DevOps model, you got the Kubernetes container orchestration, micro-services. How does that all connect together for the customer? I mean, is it turnkey with OpenShift? You guys had that nice bet with Core OS, pays big, huge dividends. What are some of those fruits in the operating model? So the customer has to think about the systems. It's a systems model, it's an operating system, so-to-speak. But they still got to develop and build apps. So you got to have a systems-holistic view and be able to deliver the value. Where does it all connect? What's your explanation? >> So distributed systems are complex. And we're at the point where no individual can keep track of the hundreds, the thousands, the hundred-thousand containers that are running. So, the only way, then, to do it is to be able to say, how can the system be smart? So, at the Commons yesterday we had sort of a tongue-in-cheek slide that said, the factory of the future will only have two employees, a man and a dog. The man's there to feed the dog, and the dog's in place to ensure the man doesn't go off and actually touch the equipment.
And the point really being, how can we bring technology that can bring that to bear. So, one example of that is actually through our Core OS acquisition. The Core OS team was working on a technology called operators. Which is to say, how can we take the human knowledge that exists, to take complex software that's built by third parties and bring that natively into the platform, and then have the platform go and manage them on behalf of the actual customer itself. Now we've got over 60 companies building operators. And we've, in fact, taken entire OpenShift platforms and put operators to work. So it's completely automated and self-managed. >> The trend of hybrid is hot. You mentioned it's here to stay. We would argue that it's going to be a gateway to multi-cloud. And as you look at the stacks that are developing and the choices, the old concept of a stack-- and Chris was on earlier, the CTO of CNCF. And I kind of agree with him. The old notion of stack is changing because if you've got a horizontal, scalable cloud framework, you got specialty with machine learning at the top, you got a whole new type of stack model. But, multi-cloud is what the customers want choice for. Red Hat's been around long enough to know what the multi-vendor world was years ago. Multi-vendor choice, multi-cloud choice. Similar paradigms happening now. Modern version of multi-vendor is multi-cloud. How do you guys see the multi-cloud evolution? >> So we keep investing and helping to make that a reality. So, last week, we made some announcements around OpenShift Dedicated. OpenShift Dedicated is the OpenShift managed service on AWS. OpenShift is available in ways where it can be self-managed directly by customers in a variety of environments. Directly run on any public cloud or OpenStack, or whatever environment you'd like. We have third-party partners. For example, DXC D-systems providing managed versions of OpenShift. And then you can have Red Hat manage OpenShift for you. For example, on AWS, or coming next year, with Microsoft. Through our partnership for OpenShift on Azure. So you as a customer now have, I think, more choice than you ever had before. In terms of adopting DevOps or dealing with micro-services. But then having flexibility with regard to taking advantage of tools, services, that are coming from, pretty much, every corner of the IT industry. >> You guys have a huge install base. You've been servicing customers for many, many years, decades. Highest level support. Take us through what a customer, a traditional Red Hat customer that might not be fully embracing the cloud in the past, now is on-boarding to the cloud. What's the playbook? What do you guys offer them? How do you engage with them? What's the playbook? Is it, just buy OpenShift? Is there a series of-- how do you guys bring that Red Hat core Linux customer that's been on-prem, maybe a little bit out of shadow IT in the cloud, saying, hey, we're doing a digital transformation. What's the playbook? >> So, great question, John. So, first of all, digital transformation might be an over-hyped term. Might be at peak hype at this point in time. But I think that the bigger point from my perspective is how do you move more dollars, more euros, more spend towards innovation. That's what every company is sort of trying to do. So, our focus is, how can we build on the investments that they've made? At this point in time, (mumbles) Linux probably has 50,000 customers.
So, pretty much, every customer, any size, around the world, is some kind of Linux user. How can we then say, how can we now provide you a platform to have greater agility and be able to develop these services quicker? But, at the same time, not forget the things that enterprises care about. So, last week we had our first big security issue released on Kubernetes. The privilege escalation flaw. And so, obviously, we participate in the community. We had a bunch of folks, along with others addressing that, and then we rolled our patches. Our patch roll-out went back all the way to version 3.2, 3.2 shipped in early 2016. Now, on the one hand you say, hey, everyone has DevOps, why do you need to have a patch for something that's from 2016? That's because customers still aren't moving as quickly as we'd like. So, I just want to temper, there's an enthusiasm with regard to, everyone's quick, everything's lightning fast. At the same time, we often find-- and so, going back to your question, we often find some enterprises will just take a little bit longer, in reality to kind of get-- (both speaking at once) >> Workloads, they're not going to be moving overnight. >> That's right. >> So there's some legacy from those workloads. >> Right, right. And so, what we want to do is ensure, for example, that the platform, so we talked about the security and lifecycle, is supporting these Cloud Native, next generation, stateless applications, but also established legacy stateful applications, all on the same platform. And so the work we're doing is to ensure we don't-- you know, it's like, leave no application behind. So, either the work that we'll do, for example, with Red Hat Innovation Labs. We help sort of move that forward. Or with GSIs, global systems integrators, to bring those to bear. >> Ashesh, wonder if we could drill in a little bit. There's a lot of re-training that needs to happen. I've been reading lots on that. It's not, oh, I bring in this new Cloud Native team that's just going to totally re-vamp it and take my old admins and fire them all. That's not the reality. There's not enough trained people to do all of this wonderful stuff. We see how many people are at this show. Explain what Red Hat's doing. Some of the training maturation, education paths. >> So we do a lot of work on just the core training aspect, learning services, get folks up to speed. There's work that happens, for example, in CNCF. But we do the same thing around certifications, around administering the systems, developing applications, and so on. So that's one aspect that needs to be learned. But then there's another aspect with regard to how do we get the actual platform, itself, to be smart enough to do things that, in the past, individual people had to do? So, for example, if we were to sort of play out the operator vision fully and through execution. In the past, perhaps you needed several database admins. But, if you had operators built for databases, which, for example, Couchbase, Mongo, and others have built out, you can now run those within the platform and then that goes and manages them on your behalf. Now you don't need as many database admins, you free those people up now to build actual business innovation value. So, I think what we're trying to do is increasingly think about how we sort of, if you will, move value up the stack to free up resources to kind of work on building the next generation of services. And I think that's our business transformation work.
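To make the operator idea above a little more concrete, here is a minimal sketch of what the user-facing side can look like: instead of a database admin performing each step by hand, the desired state is declared as a custom resource and an operator running in the cluster reconciles toward it. The resource group, kind, plural, and spec fields below are hypothetical stand-ins, not any vendor's actual API; only the Kubernetes Python client calls are real.

```python
# Illustrative sketch only: "example.db/v1" / "DatabaseCluster" and its spec
# fields are made-up placeholders for whatever CRD a real database operator defines.
from kubernetes import client, config


def request_database_cluster(namespace: str = "default") -> dict:
    """Declare a desired database cluster; the operator watching this CRD
    creates and maintains the pods, storage, and backups to match it."""
    config.load_kube_config()  # assumes a local kubeconfig with cluster access

    desired_state = {
        "apiVersion": "example.db/v1",   # hypothetical group/version
        "kind": "DatabaseCluster",       # hypothetical kind
        "metadata": {"name": "orders-db"},
        "spec": {                        # the knowledge an admin used to apply by hand
            "replicas": 3,
            "version": "5.5",
            "backupSchedule": "0 2 * * *",
        },
    }

    api = client.CustomObjectsApi()
    return api.create_namespaced_custom_object(
        group="example.db",
        version="v1",
        namespace=namespace,
        plural="databaseclusters",       # hypothetical plural name
        body=desired_state,
    )
```

Once the object is created, the operator, not a person, keeps the running system converged on that spec, which is the "fewer database admins, more people freed up for innovation" point being made in the answer above.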
>> And I think, even though digital transformation is totally over-hyped, which I agree, it actually is really relevant. Because I think the cloud wave, right now, has been certainly validated. But what's recognized is that people have to re-imagine how they do their infrastructure. And IT is programmable. You're seeing the network. The holy trinity of IT is storage, networking, and compute. So, when you start thinking about that in a way that's cloud-based, it's going to require them to, I don't want to say re-platform, but really move to an operating environment that's different than what they used to have. And I think that is real. We're seeing evidence of that. With that in mind, what's next? What do you guys got on the horizon? What's the momentum here? What's the most important story that you guys are telling here at Red Hat? And what's around the corner? >> Yeah, so obviously, I talked about a few announcements that we made right around OpenShift Dedicated and the upgrades around that. And things like, for example, supporting bring-your-own-cloud. So, if you got your own Amazon security credentials, we help support that. And manage that on your behalf, as well. We've talked this week about our support of Knative, trying to introduce more serverless technologies into OpenShift. We announced the contribution of etcd to the Cloud Native Computing Foundation. So, continuing to re-affirm our commitment to the community. I think looking ahead, going forward, our focus next year will be on OpenShift 4, which will be the next release of the platform. And there, it's all about how do we give you a much better install and upgrade experience than you've had before? How do we give you these clusters that you can deploy in multiple different environments and manage that better for you? How do we introduce operators to bring more and more automation to the platform? So, for the next few months our focus is on creating greater automation in the platform and then enabling more and more services to be able to run on that. >> Pretty exciting for you guys riding the wave, the cloud wave. Pretty dynamic. A lot of action. You guys have had great success, congratulations. >> Thank you very much. >> You're fun to watch. The Cube coverage here. We're in Seattle for KubeCon 2018 and Cloud Native Con. I'm John Furrier, your host. Stay with us for more coverage of day one of three days of coverage after this short break. We'll be right back. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Alex Polvi | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Ashesh Badani | PERSON | 0.99+ |
Chris | PERSON | 0.99+ |
Seattle | LOCATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
two employees | QUANTITY | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Linux | ORGANIZATION | 0.99+ |
January | DATE | 0.99+ |
Open Shift | TITLE | 0.99+ |
last week | DATE | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Red Hat Innovation Labs | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
hundreds | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
2016 | DATE | 0.99+ |
50,000 customers | QUANTITY | 0.99+ |
Kubecon | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
Ashesh | PERSON | 0.99+ |
first | QUANTITY | 0.99+ |
KubeCon | EVENT | 0.99+ |
three days | QUANTITY | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
Cloud Native Con. | EVENT | 0.98+ |
2014 | DATE | 0.98+ |
early 2016 | DATE | 0.98+ |
Seattle, Washington | LOCATION | 0.98+ |
KubeCon 2018 | EVENT | 0.98+ |
one example | QUANTITY | 0.98+ |
one | QUANTITY | 0.97+ |
Both | QUANTITY | 0.97+ |
Stu | PERSON | 0.97+ |
both | QUANTITY | 0.97+ |
Cube | ORGANIZATION | 0.97+ |
etcd | ORGANIZATION | 0.97+ |
Cloud Native Con North America 2018 | EVENT | 0.97+ |
8000 attendees | QUANTITY | 0.97+ |
two other personas | QUANTITY | 0.97+ |
Open | TITLE | 0.96+ |
over four years | QUANTITY | 0.96+ |
thousands | QUANTITY | 0.96+ |
over 60 companies | QUANTITY | 0.96+ |
Progressive Insurance | ORGANIZATION | 0.96+ |
Linux | TITLE | 0.95+ |
Red Hat | ORGANIZATION | 0.95+ |
one aspect | QUANTITY | 0.95+ |
hundred-thousand containers | QUANTITY | 0.95+ |
Red Hat | TITLE | 0.94+ |
a dog | QUANTITY | 0.93+ |
Cloud Native Con | EVENT | 0.92+ |
this week | DATE | 0.92+ |
Core | TITLE | 0.92+ |
OpenShift | TITLE | 0.92+ |
Core OS | TITLE | 0.92+ |
a man | QUANTITY | 0.91+ |
Cloud Platforms | ORGANIZATION | 0.9+ |
Knative | ORGANIZATION | 0.89+ |
One | QUANTITY | 0.88+ |
Open Shift four | TITLE | 0.84+ |
Kubernetes | ORGANIZATION | 0.84+ |
day one | QUANTITY | 0.82+ |
Shift | TITLE | 0.81+ |
Jennifer Cloer, The Chasing Grace Project | Red Hat Summit 2018
>> Announcer: From San Francisco it's theCUBE. Covering Red Hat Summit 2018. Brought to you by Red Hat. >> Welcome back, everyone. We are here live in San Francisco, at Moscone West for the Red Hat Summit and we're covering three days of wall-to-wall coverage. I'm John Furrier with my co-host John Troyer. Our next guest is Jennifer Cloer, creator and executive producer of The Chasing Grace Project, a former CUBE alumni, was on at the CloudNOW awards at Google. Great to see you. >> Great to see you, thanks for having me. >> So obviously Open Source has seen amazing growth, okay, and it has kind of democratized software. >> Right. >> You've got a project in my opinion that I think is democratizing, getting the word out on the tech issues around women in tech and more importantly, it's inspirational, but it's also informational. Take a minute and explain what is the project Chasing Grace? Obviously Grace, Grace Hopper. >> Right. Right, The Chasing Grace Project is a documentary series of six episodes about women in tech. The name does lend itself to Grace. We named it after Grace Hopper because she really exemplifies the grit and the excellence that we're all chasing all the time. It's also this idea that we're chasing the idea of grace in the face of adversity. It's not always easy but the women who we've interviewed and talked to exhibit amazing grace and are super inspiring. So the series doesn't shy away from adversity but it certainly focuses on stories of resilience. >> And when did you start the project, and are there episodes? Is it on Netflix? >> Yes. >> Is it on DVD? >> (laughs) Let's hope. We hope so. We started the project, excuse me, about a year and a half ago. I put a call for stories out in a number of women in tech forums I belong to, was inundated with responses. Women are ready to share their stories. Spent every Friday for about four or five months on back-to-back calls with women, produced the trailer last May, a year ago, released it in September, and since then it's been a whirlwind. Lots of interest. Lots of men and women wanting to share their stories, as well as people wanting to underwrite the work, which is fabulous because it relies on sponsors. So yeah, we're about a year and a half in. We just finished episode one and screened it. We've got four or five more to go so we're early. We're early, but it's happening. >> And share some stories because I saw the trailer, it's phenomenal. There's women in tech and the culture of the bro culture, people talk about that all the time. It's male-dominated and you're seeing here with Red Hat Summit, there's women here but it's still dominated by men. >> Right. >> The culture has to evolve and I think a lot of men are smart and see it. Some aren't and some are learning. I would call learning a bigger (laughs) percentage. >> Sure. >> What are you finding from the women who are really driving the change? What's been the big trend line? And how are the men reacting? Because the men have to be involved, too, because they also have to take responsibility for the change. >> Absolutely, absolutely. I would say that by women sharing their stories we are starting to change culture. I'm actually keynoting today at the Women's Leadership lunch at Red Hat Summit. I'm going to talk about that, the impact of story on cultural change, because there's a lot of reasons cited for the decline of women in tech, because we've gone backwards. There's actually fewer than ever before. But many things are cited.
So the pipeline issue, poor education, but the biggest thing cited is the culture and the culture has changed over the course of the last decade in particular. So the women we've talked to, their stories of resilience are starting to change that culture. When people talk and share experiences and stories, there's empathy that comes from both men and women who hear those stories and I think that that starts to change culture. It's starting to happen. I think we are pivoting, it's happening. But there's still a lot of work to do. >> John Troyer: Jennifer, at the keynote, or at the luncheon here, the Women's Leadership luncheon, anything else that you'll be bringing up? That sounds like part of your message here that you're going to be bringing today and you want to share right before you go up? >> Yeah, sure. So like I said, I'll talk about the impact of story on culture. I'll talk about the stories of resilience. I'm going to share a few stories from women who we've actually interviewed and featured in episode one. Because you can't see episode one online because we're in discussions with distributors, I'm going to share those stories with this audience. And I think folks can, like I said, learn from those and gain empathy and walk away hopefully with action. >> That seems great. The storytelling of course is key, right? We're in an interesting place in our culture today and I think social media, the 10 or 20 years of social media that we've had is part of that. I know my feed is filled with incredible women leaders in tech and frankly it's much better for it. But you know, you do sense a sense of almost weariness in some folks because this is one, they get shit on, can I say that? >> Hey, it's digital TV, there's no censorship. >> But also you'd like to eventually, if you're a woman in tech, you'd like to be able to talk about tech, not just being a woman in tech. >> Right, right. >> I guess, is that just at the part, is that just where we are in society right now? >> I think so and you know, it's a marathon, not a sprint, right? It's going to take a long time. It took a long time to get us to this place, it's going to take a long time to move us forward. But yeah, women do want to build tech and not have to advocate for themselves. Hopefully projects like The Chasing Grace Project and other work that's happening out there, there's a lot of initiatives that have sprung up in the last few years, are helping to do that so that the women who are building can build. >> What's your big takeaway from the work you've done so far? It could be something that didn't surprise you that you knew was pretty obvious and what surprised you? What's some of the things that's come out of it that's personal learnings for you? >> I think the power that comes from giving women a platform to be seen and heard for their experiences. Almost every woman I've talked to says I feel so alone. They're in an office with mostly men. There might be another woman but they feel so alone and when they share their stories and they see other women sharing their stories, they know they're not alone. There may be few of them but the stories are very similar. I think that men learn a lot when they see women sharing their stories, too because they don't know. The experiences that we all have are very different. We're walking through the same industry but our day-to-day experiences are quite different. 
Learning what that's like, both for women, for men - there are men that are going to be featured in this series - and women learning of other women. Just the power in that. Most women tell me I don't really have a story. Well, you both know that when you dig a little bit, >> They all have stories. >> everybody has a story. Everybody has a story, multiple stories. So, yeah. >> So let me ask you a question. This has come up in some of my interviews on women in tech, and that is that it kind of comes up subtly, it's not really put out there, like you said, aggressively. But they say there's also a women-to-women pressure. So how have you found that come up? Because it's not just women and men. I've heard women say there's pressure, there's other pressures from other women. Do more or do less, and it's kind of an individual thing but it's also kind of code, as well, to stick together. At the same time, there's a women and women dynamic. >> Yeah. >> What have you found on that? >> Mostly I've found, I think there's a shift happening, mostly I've found that women are forming community and supporting each other. Everyone has a different definition of feminism or womenism (laughs) as some women have called it, but I think there are some women who have told me, usually the older generations who have told me there's only room for one woman at the table. One woman makes it to leadership and she's very protective of that space. But we're seeing that less and less. >> I don't want to turn this into, you hate to turn this into a versus scenario, right? Especially online I see a lot of interaction of men coming up and saying, either trying to explain to women what their problem is or, but also saying educate me, like take your time to educate me because I can't be bothered to figure it out myself. Or also trying to stand up themselves and lead the charge. So one of my personal things I do, I sit back and let the women talk and listen to them about what they want to do. >> Right. >> Any particular advice you have for folks who are listening and who might want to, you know, what do you do? I guess sit down and pay attention. >> Yeah, I'd say listen to the stories. Listen to what women need and want out of their male allies and advocates. And listen to the women who you already are friends and colleagues with. What do they need from you? Start there. And then build your way out. I remember when I first started The Chasing Grace Project, I was actually advised by people, well don't feature men at all because they can't speak for women, and that's very true, but I've decided that we will feature both men and women because we're all part of the industry, right? When I talk about the future, it's being built by all of us. We need more women in leadership. We don't need just women in leadership, we need men and women. So I think though, right now at this moment in time men should listen and ask their, like I said, their inside circle of women that are friends and colleagues, what can I do? What do you need in terms of my support? >> And it's inclusion, too. There's a time to have certain, all women and then men, as well. >> Right. >> Kind of the right balance. >> Right. >> Well, I have to ask you, obviously, Red Hat is an Open Source world. Community is huge. Obviously tech has a community and some will argue how robust it is (laughs) >> Right. (laughs) >> and fair it is. And communities have their own personality, but the role of the community becomes super critical.
Can you just share your thoughts and views of how the role of the community can up its game a bit on inclusion and diversity? And I put inclusion first because inclusion and diversity, that seems to be the trend in my interviews, diversity and inclusion, and now it's inclusion and diversity. But the community has some self-policing mechanisms. There's kind of a self-governance dynamic of communities. So it's an opportunity. >> It is an opportunity. >> So what's your view? >> There are a lot of things that are talked about within the Open Source community in terms of how to advance inclusion in a positive way. One is enforcement. So at events like this, there's a code of conduct. They've become very popular. Everybody has one, for good reason, but everybody's doing them now. I worked at The Linux Foundation for 12 years. When you have an incident at an event, if you don't enforce your code of conduct, it doesn't mean anything. So I think that's one very tangible example of something you can do. We certainly tried at The Linux Foundation, but I remember it was a challenge. If something happened, what was the level of issue and how would we enforce that and address it? So I think the community can do that. I think start there, yeah. >> What's your take on The Linux Foundation, since you brought it up? Lots going on there. >> Right. >> You've got CNCF is exploding in growth. >> Jennifer: Right. >> Part of that, Jim Zemlin is doing a great job. As you look at The Linux Foundation since you have the history, >> Yeah. >> where it's come from and where it's going, what's your view of that? >> My goodness. I was part of The Linux Foundation before it was called The Linux Foundation. It was called Open Source Development Labs, way, way, back. But you know, always impressed with what The Linux Foundation is doing. CNCF in particular is on fire. I watched my social media feeds last week about KubeCon in Copenhagen, a lot of friends there. You know, Open Source is the underpinning of society. If the world we live in is a digital one and we're building that digital existence for tomorrow, the infrastructure is Open Source. So it's just going to become more and more relevant. >> And they're doing a great job. And it's an opportunity with the community again to change things. >> Yeah. >> There's a good mindset in the Open Source community with Linux Foundation. Very growth-oriented, growth mindset. Love the vibe there. They've got good vibes. >> Yeah. >> They're very open and inclusive. >> There's some projects that are really prioritizing. DNI, one of which is Cloud Foundry Foundation. Abby Kearns is doing an amazing job there. The Node.js community I think is pretty progressive. So yeah, it's encouraging. >> Abby was on theCUBE. We were there in Copenhagen. >> Right, right. >> Thanks for coming on. >> My pleasure. >> What's next for you? Your life's a whirlwind. Take a quick minute. >> Yeah, I'm in Chicago next week for a shoot. We're shooting episode two which is focused on women in leadership roles. There's only 11% of executive positions in Silicon Valley are held by women. So it's a provocative topic because a lot of women haven't experienced that so we want more to do that. >> Well, if you need any men for the next show, John and I will happily volunteer. >> Okay, wonderful. >> To be stand-ins and backdrops. >> Fantastic, thank you. >> Thanks for coming on. It's theCUBE coverage here live, Moscone West in San Francisco for Red Hat Summit 2018. We'll be back with more coverage after this short break.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jennifer Cloer | PERSON | 0.99+ |
Jim Zemlin | PERSON | 0.99+ |
Jennifer | PERSON | 0.99+ |
John | PERSON | 0.99+ |
John Troyer | PERSON | 0.99+ |
Chicago | LOCATION | 0.99+ |
Abby Kearns | PERSON | 0.99+ |
Copenhagen | LOCATION | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
10 | QUANTITY | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
September | DATE | 0.99+ |
Linux Foundation | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
12 years | QUANTITY | 0.99+ |
Grace Hopper | PERSON | 0.99+ |
six episodes | QUANTITY | 0.99+ |
Cloud Foundry Foundation | ORGANIZATION | 0.99+ |
Grace | PERSON | 0.99+ |
five | QUANTITY | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
four | QUANTITY | 0.99+ |
Abby | PERSON | 0.99+ |
KubeCon | EVENT | 0.99+ |
Red Hat Summit | EVENT | 0.99+ |
last week | DATE | 0.99+ |
both | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
a year ago | DATE | 0.99+ |
next week | DATE | 0.99+ |
Red Hat Summit 2018 | EVENT | 0.99+ |
20 years | QUANTITY | 0.99+ |
One woman | QUANTITY | 0.99+ |
last May | DATE | 0.98+ |
The Linux Foundation | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
Women's Leadership | EVENT | 0.98+ |
Grace Hopper | PERSON | 0.98+ |
three days | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
11% | QUANTITY | 0.97+ |
The Chasing Grace Project | TITLE | 0.97+ |
DNI | ORGANIZATION | 0.97+ |
Netflix | ORGANIZATION | 0.97+ |
CUBE | ORGANIZATION | 0.96+ |
one woman | QUANTITY | 0.96+ |
Chasing Grace | TITLE | 0.95+ |
about a year and a half ago | DATE | 0.95+ |
CloudNOW awards | EVENT | 0.95+ |
about four | QUANTITY | 0.94+ |
The Linux Foundation | ORGANIZATION | 0.94+ |
Moscone West | LOCATION | 0.93+ |
ORGANIZATION | 0.92+ | |
San Francisco | LOCATION | 0.92+ |
tomorrow | DATE | 0.9+ |
last decade | DATE | 0.89+ |
Women's Leadership luncheon | EVENT | 0.85+ |
five months | QUANTITY | 0.85+ |
episode one | QUANTITY | 0.83+ |
The Chasing Grace Project | TITLE | 0.81+ |
Open Source Development Labs | ORGANIZATION | 0.81+ |
One | QUANTITY | 0.77+ |
last few years | DATE | 0.76+ |
every Friday | QUANTITY | 0.75+ |
theCUBE | ORGANIZATION | 0.74+ |
episode | QUANTITY | 0.72+ |
episode two | OTHER | 0.67+ |
one | OTHER | 0.65+ |
John Allessio & Nick Hopman - Red Hat Summit 2017
>> Voiceover: Live from Boston, Massachusetts, it's the Cube covering Red Hat Summit 2017. Brought to you by Red Hat. >> Hi, I'm Stu Miniman and welcome back to the three days of live coverage here at Red Hat Summit 2017. The sixth keynote of the week just wrapped up. Everybody's streamin' out. We've got a couple more segments. Happy to welcome back to the program a couple gentlemen we had on actually at the Open Stack Summit. John Allessio, who's the vice president of - and Nick Hopman, who is the senior director of Emerging Technology Practices, both with Red Hat. Gentlemen, great to see you again. >> Great to see you again Stu, good afternoon. >> Yeah, so a year ago you guys launched this idea of the Open Innovation Labs. We're opening these labs this year. You've got some customers. We actually had Optum on earlier in the week. We're going to have the easiER AG guys on, I should say - I was corrected earlier this week. I shouldn't say guys, actually I think it's two doctors, a man and a woman that are on. >> Andre and Dorothy. Andre and Dorothy - so really amazing customer testimonials for working through. So John, why don't you start with, you know, give us the update on the innovation lab program. Open and innovation get, you know, discussed a lot. Give us the real meat of what happens. >> So, just maybe a quick recap. >> Yeah. >> So Stu, we had about oh a year and a half ago or so, our strategic advisory board tell us, Red Hat, we really are looking for you to help show us the way in how to develop software, but also kind of help us leverage this culture that Red Hat has in developing software the Red Hat way. And so we worked with about a dozen clients across the globe, got a lot of great feedback on what they were looking for. We created an offering and then we launched it, as you said, in Austin at Open Stack Summit. And now we've done many engagements in Europe and in North America across multiple different industries. We had here at the Summit this week actually two clients talk on the main stage, both Optum and easiER AG. And both of them have been through innovation lab engagements. Very different industries, very different clients, but what it has proven in both cases is it's really been a great way and a great catalyst to kind of spark innovation, whether it's within an existing IT infrastructure or building out some capability in particular customer environments, like we did with Optum, or kind of taking some ideas. And I'll let Dorothy and Andre tell their story when they come on and work with you. I don't want to take their thunder. But a great way to show you how we can work with a start up and really help them kind of take their vision and make it reality in an application. >> Yeah, Nick, you know, we've done so many interviews about the various pieces, lots of interesting business. It reminds me of that kind of pipelining that you talk about. One of the announcements this week was Open Shift IO, which helps with kind of the application modernization. Can you maybe help us, you know, put together how the products that Red Hat does and what you're doing in the Open Innovation Labs, how do those go together and mesh and new stuff come in? >> It's actually kind of at the core of what we do anyway. So, we are building on top of the foundation, the technologies at Red Hat's core platform. But in a residency with Open Innovation Labs we are tying in other technologies, other things outside of the Stack.
But with like Open Shift IO, what we've created was what we called the push button infrastructure. How are we showing with the process and everything to innovate on top of the Red Hat technology? How do we accelerate that journey? And so we created what was called the push button infrastructure to show that foundational acceleration, and Open Shift IO is actually now kind of part of that core. And adding in other components, other technologies that Red Hat has, whether it's our ISV partners, things in Open Shift commons, all those things to accelerate the application development experience. And so I think with Open Shift IO and as Red Hat continues to evolve in the development kind of tooling landscape, you're going to see how we are helping our customers do cloud-native application development more so than ever before. >> Yep, and maybe to add to that too, Nick, we were talking to a client this morning about some of their challenges and their priorities for this current fiscal year. And that particular client was talking about Jenkins and a number of non-Red Hat technologies as well because at the end of the day, our customers have Red Hat products, have non-Red Hat products. I think the great thing that maybe you can mention is when you look at that push button infrastructure that we've built, it's not really a Red Hat thing, although it clearly is tied to the Red Hat technology. But it's even bigger than that. And I think that would be important for the team to understand. >> Yeah, so we actually have online what we call our tech stack, and it allows the customer to kind of select the current technologies that we've currently got integrated into our push button infrastructure, and it's always evolving. So I think what we're trying to bring to the table from a technology perspective is our more prescriptive approach. But it's always changing, always evolving. So if customers are wanting to use x or y technology, we're able to integrate with that. But even more so, if you take that technology as the foundation, put in a couple of droplets of the Red Hat DNA, the culture is really where that innovation and that inspiration is culminating on top of it. So they're building out the applications, like the easiER AG examples. >> John: Yeah, excellent. >> It's great, I always love - By the time we get to the end here, oh I see some of the common threads. You know, for example, Ansible was acquired a year and a half ago, and boy we've seen Ansible you know weave its way into a lot of products. >> Nick: Sure. >> Was talking to Ashesh just a short while ago. And the Open Stack commons, which reflected what you were just talking about, is customers are coming, they're sharing their stories. And it's not all Red Hat pieces. One thing I think, I go to a lot of technology shows, and it's usually, "Oh, well we want to talk about solutions." But by these pieces, and Red Hat at its core is all open source, and therefore there's always going to be other pieces that tie in. How do you extend as to how much of this is driven by the Red Hat business versus you know the problems of the customer? I'm sure those mesh together pretty well, but maybe some learning you've had over the last year that you could share on that. >> Sure. I think one of the great starting points, Stu, is what we try and do in every case is start with what we call a discovery session.
So it's one of our consultants, or one of our solution architects really going into the client and having a discussion around what is the business problem we're trying to solve, or what is the business opportunity we're trying to capitalize upon. And from there, you know we have a half day to a day kind of discussion around what these priorities are, and then we come back to them with the deliverable that says okay, here's how we could solve that problem. Now there will be areas that we of course think we have Red Hat technology that absolutely is a perfect fit. We're going to put it in and make that as a recommendation. But there's going to be other technologies that we're also going to recommend as well. And I think that's what we've learned in these Innovation Lab engagements. Because often it's a discussion with IT of course, but also a discussion with line of business. And sometimes what happens in these discovery sessions is sometimes it's the line of business and IT perhaps connecting for the first time on this particular topic. And so we'll come back with that approach and it'll be an approach that's tailored to that customer environment. >> One thing that kind of pivots a little bit from the topic of the technology, but I mean the culture and how we're doing this. I mean we are working with ISVs and things of how they could come through the residency to get things spun up into Open Shift commons and get their technology in the Stack or integrated with Red Hat's technical solutions. But on the other hand, you know really when they come in and they work with us, they're driving forward with looking at you know changes of their culture. They're trying to do digital transformation. They're trying to do these different types of things, but working with that cross-functional team. They're coming up with, oh wow, we were solving the problems the wrong way. And that's kind of just the point of the discovery session, figuring out what those business challenges are is really kind of what we're bubbling up with that process. >> Yeah, I'm curious. When I think about just open innovation, even outside of the technology world, sometimes we can learn a lot from people that aren't doing the same kind of things that we've been doing. I know you've got a couple of case studies here, customers sharing their stories, but how do we allow the community to learn more? When they get engaged in the innovation lab are customers sharing a little bit more? We know certain industries are more open to sharing than others, but what are they willing to share? What don't they share? How do you balance that kind of security if you will of their own IP as separate from the processes that they're doing? >> It's actually kind of interesting, we had a story this week, we have an engagement going on in our London space, which will be launching in a week and a half. But they're going on right now. And there was a customer that was kind of coming through for a regular executive briefing if you will. And we walked him through the space. And they saw the teams working in there, and where before, in the sales kind of meeting, they were a little bit close-minded and close-sourced if you will. Trying to not want to share some of their core nuggets of their IP if you will. And once they saw kind of the collaborative landscape, and this is not even technology based, but just the culture of an open conversation. You know I hate to overuse - you know the sticky notes everywhere, the dev ops.
I mean they were really doing a conversation with the customer that was engaging. And all of a sudden the customer that was there on the sales conversation goes, "I want to do this session, I want to go through this discovery session with you guys." And so I think customers are trying to do that. And the other thing is, in our spaces and in our locations, like Boston, we are actually having two team environments, and we've designed it to try and create collisions. So they're basically on two sides, but there's also a common area in the middle where we're trying to create those collisions to inspire that open conversation with our clients as well. Some may be comfortable with it, some might not be as comfortable with it, but we're going to challenge them. >> Nick, I love that term collisions. There's a small conference I go to in Providence. Haven't made it every year, but a few times. It's an innovation conference. And they call it the random collision of unusual suspects. It's the things we can learn from the people we don't know at all. Unfortunately, we do that too much. You know, we know the people we know. We know a lot of the same information that we know. If somebody outside of the like three degrees of separation that you might find, that next really amazing thing that will help us move to the next piece, it brings me to my next point. You mentioned London and Boston, how do you decide where you're building your next centers, what's driving that kind of piece of it? And, you know, bring us up to speed on the two new locations, one of which if we had a good arm we might be able to throw a baseball and hit. >> Excellent, so let me just start by first of all saying, you know part of what we're doing here is it's this experiential residency is what it is. And that residency can happen at a client location, at a Red Hat location, or even a pop-up you know kind of third party location. And quite frankly, over the course of the last year, we've done all three of those scenarios. So all three of them are valid. As far as it relates to a Red Hat facility, what we try and do is find a location if we can that's either co-located with a large percentage of Red Hat clients, and/or maybe Red Hat engineering. Because oftentimes we'll want to bring some of the engineers into these sessions. So, Mountain View, where we have a center today, was a natural 'cause we have some engineering capability out on the west coast. And Boston is of course very natural as well because we have a very large engineering presence here in Boston. In fact, I'll let you talk a little bit about the Boston center 'cause that's going to be our next one that opens here in just a few weeks. So maybe Nick, talk a bit about you know what we're doing in the Boston center, which will be, if you will, our worldwide hub for Red Hat innovation. It's not just going to be the Boston center, it's also going to be our worldwide hub. >> No pun intended that it's in the hub that is Boston. >> You got it, you got it! >> Excellent. >> So you know, what are we doing in the innovation center, and the engineering center, and the customer briefing center all co-located in Boston? >> Yeah, so it's actually going back to the collisions. We even try and create collisions in our own organization. So it's actually an eight-shaped building. We've got four floors, or two floors on each side. So kind of effectively four floors.
Engineering on one side on two floors, and an EBC on a floor above the Open Innovation Labs, and the Open Innovation Labs on the third floor if you will. And there's actually floor cut-outs, so people you know if they're coming in from an executive briefing, they can see down, see what's going on there. And then engineering on the other side. And the point there is that open culture just even within our organization, working with the engineers across the board, getting them over into our space, working with us to solve the problems. And showing, you know, I think the key point that I would hit on there is really trying to inspire customers with what it's like to work in a community. So community powered innovation. All those types of things. And so the space is trying to do that. The collisions, the openness obviously, flexibility, but also what we're trying to do is create a platform or a catalyst of innovation. And whether or not it's in the location or a pop-up location, we're trying to show the customer some of these principles that we're seeing that's effectively allowing Red Hat to drive the innovation, and how they can take that back into their own. So, you know the locations are great for driving a conversation from a sales perspective, and just overall showcasing it. But the reality is we've got this concept to innovate anywhere. We want to be able to take our technology, our open culture, everything you would want to use, and go be able to take that back into your organization. 'Cause our immersive experience is only you know, it's kind of camp for coders or camp for the techies if you will. So you know that's working well, but that's not long term. Long term we have to show them how they can drive it forward, you know, with themselves. >> Where do I sign up for the summer program? (all laugh) >> It's coming this summer. >> So Boston will launch in the end of June. >> End of June, early July. >> And the June timeframe we had, I don't know how many dozens of clients, and partners, and Red Hatters go through in hard hat tours this week, here at the Summit. And then in two weeks, we'll open in downtown or really in the heart of London. >> Stu: Alright, yeah, quick flight across the pond to get to London. Anything special about that location? >> I think just overall the locations all have a little bit of uniqueness to them. I mean they're definitely - we did design them to inspire innovation, thinking outside the box. So I think you know, if you go visit one of our locations you might find a couple kind of hidden rooms if you will. Some other unique things. But overall, they are just hubs in general for the regions. Hubs of technology and innovation. And so from the go forward perspective I mean we are trying to say, you know, Red Hat is doing things different, thinking different. And these are kind of a way to show it. So trying to find that urban location that is a center point for people to be able to travel in and be able to experience that is really kind of the core. >> So London will open in two weeks, and then we're already working on blueprints for Singapore. >> Singapore, yeah. >> For our Asia hub, and we've had some great conversations with our leader for Latin America about some very initial plans for Latin America as well. So you know, we'll have great presence across the globe. We'll be able to bring this capability to customer sites. We've already done that. We'll be able to do pop-ups.
'Cause even in some cases customers are saying you know we don't want to travel, but we want to get out of our home environment so we can really focus on this and have that immersive experience, and that intimate experience. So we'll do the pop-ups as well. >> Driving change, we are seeing that that's the best way. Especially with this kind of, you know, the residency. It is a time box. So if we get them out of their day to day, some of the things, you know, sometimes are the things that are holding them back. Get them in the pop-up location, get them outside of their space. All of a sudden their eyes open up. And we had a large retailer, international retailer that we did a project with on the west coast, and getting them out of their space got them coming back. The actual quotes from their executives and the key stakeholders were like they came back fired up. >> Stu: Yeah. >> And they came back motivated to try to make change without our organization. So it's disruption on every level. >> Yeah, you can't underestimate the motivation and the spirit that people come out of these engagements with. It's like a renewed sense of, "I can do this." And we saw that exactly with this retail engagement of really already working on preparing for Black Friday, and putting some great plans in place and really building that out for them. >> John Allessio, Nick Hopman; we always love digging in about the innovation. Absolutely something that excites most people of our industry. If that doesn't? Maybe you're in the wrong industry. >> Exactly. >> We've got a couple more interviews. Stay tuned with us. I'm Stu Miniman, you're watching the Cube. (light music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Andre | PERSON | 0.99+ |
John Allessio | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Nick | PERSON | 0.99+ |
Dorothy | PERSON | 0.99+ |
Boston | LOCATION | 0.99+ |
Nick Hopman | PERSON | 0.99+ |
two floors | QUANTITY | 0.99+ |
Open Innovation Labs | ORGANIZATION | 0.99+ |
London | LOCATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
two doctors | QUANTITY | 0.99+ |
North America | LOCATION | 0.99+ |
third floor | QUANTITY | 0.99+ |
Asia | LOCATION | 0.99+ |
two clients | QUANTITY | 0.99+ |
end of June | DATE | 0.99+ |
End of June | DATE | 0.99+ |
Austin | LOCATION | 0.99+ |
each side | QUANTITY | 0.99+ |
Singapore | LOCATION | 0.99+ |
Latin America | LOCATION | 0.99+ |
Ansible | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
three days | QUANTITY | 0.99+ |
Optum | ORGANIZATION | 0.99+ |
early July | DATE | 0.99+ |
one side | QUANTITY | 0.99+ |
dozens | QUANTITY | 0.99+ |
this week | DATE | 0.99+ |
Providence | LOCATION | 0.99+ |
June | DATE | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
easiER AG | ORGANIZATION | 0.99+ |
two sides | QUANTITY | 0.99+ |
Red Hatters | ORGANIZATION | 0.99+ |
Stu | PERSON | 0.99+ |
a year ago | DATE | 0.99+ |
two weeks | QUANTITY | 0.99+ |
Red Hat Summit 2017 | EVENT | 0.99+ |
today | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
Black Friday | EVENT | 0.99+ |
Open Shift IO | TITLE | 0.99+ |
Mountain View | LOCATION | 0.98+ |
Open Stack Summit | EVENT | 0.98+ |
this year | DATE | 0.98+ |
last year | DATE | 0.98+ |
two team | QUANTITY | 0.98+ |
four floors | QUANTITY | 0.98+ |
both cases | QUANTITY | 0.98+ |
a year and a half ago | DATE | 0.98+ |
DeLisa Alexander, Avni Khatri, Jigyasa Grover, Women In Open Source Winners | Red Hat Summit 2017
>> Announcer: Live, from Boston, Massachusetts, it's The Cube, covering Red Hat Summit 2017. Brought to you by Red Hat. >> Welcome to more of The Cube's coverage of the Red Head Summit 2017, I'm your host, Rebecca Knight. I'm joined today by DeLisa Alexander, she is the Chief People Officer here at Red Hat and then, joining us also, are the women in Open Source Technology winners. We have Jigyasa Grover and we also have Avni Khatri. So congratulations. >> Thank you. >> Thank you. >> I'm looking forward to hearing more about why you were bestowed with this honor but I want to start with you, DeLisa. >> DeLisa: Thank you. >> Why this award? Why did Red Hat feel that highlighting women and what they're doing in Open Source was worthy and we needed to showcase these women? >> Red Hat believes this is incredibly important. We all know that there are not nearly enough females in the technology industry and as the Open Source leader, we felt like we had a responsibility to begin to make a difference in that way. >> So tell us about the process. How do you find these women? How do you then winnow it down to who deserves it? >> So it's community based. It's a power of participation. >> So it's the Open Source way. >> It is the Open Source way. So the nominees come in from whomever would like to make a nomination. We do have a panel of judges that narrow down the nominations so there's five of each, the academic and the community And then we put it out to the community to vote. And so the community selects our award winners. >> Great, okay. So let's start with you, Anvi. So you, you're based here in Cambridge. >> Anvi: I am. >> And you were talking about how you had a five year goal. >> Yes. So, I was working at Yahoo! at the time and my boss at that time had asked us to make one year, five year, and 10 year goals. And in my five year plan, I had listed I wanted to set up computer labs for underserved populations. I wanted to travel, I wanted to see other cultures and I wanted to bring technology to other cultures. And I went to this awesome conference, the Grace Hopper Conference for Women in Computing. >> The Cube has a great partnership and long-term partnership with Grace Hooper. >> Awesome, it's a great conference. I was there and I met ... I reconnected with some folks and I was so inspired by all the women that were there and I came back and I was looking at my goals and I was like, why do I have to wait five years to do this? And I looked online and I saw that someone I had reconnected with, Stormy Peters at Grace Hopper, was running Kids on Computers and so I emailed her and the rest is really history. I found one of my passions in life is to bring technology to people who don't have access to it and doing it with Open Source so that it's accessible to everyone who needs it. >> So tell me about some of the stories, some of the kids that you're working with, and how it is, in fact, changing their lives. I just got back Monday night from a trip to Oaxaca, Mexico for Kids on Computers. We were there for a whole week. But we were setting up computer labs for these local rural communities. Most of them don't have internet. Some of them are now starting to get internet but what we do is we take donated equipment and grant money and Red Hat has also been ... Has awarded Kids on Computers a grant for contributing to some of the labs we set up last week. 
But we set up two new labs; we took donated equipment and we purchased equipment in country, and we worked in the small towns of Antequera and Constitución. Those are actually the school names. We worked in the city of... it's a suburb of Oaxaca City, Santa Cruz Xoxocotlán, and working with them is really enlightening. So, some of the teachers have never used a computer before. Some of the kids have, but most of them have not. So just seeing them trying to use a mouse, learning how to do single-click, double-click, and going from the point where they haven't used it to the point where they have and where they understand it, and getting to the point where one kid is teaching another kid, is just really... Just seeing that makes you feel, like, wow, I've actually made an impact. And then, hopefully, by providing access to technology and also providing access to educational content. So the offline content pieces for schools that don't have internet, working with a partner of Kids on Computers, Internet in a Box, providing offline Wikipedia, Khan Academy, MEDLINE content, offline books, we give them a pathway to bettering their own lives and bettering the lives of their communities. >> That's really incredible and it will be this really big leveling of the playing field. >> Yes, I hope so. I really hope so, and I am hopeful that will come to fruition, 'cause I think education is one of the most sustainable ways to improve communities and I think Open Source is an avenue to get them there. >> Thank you. Jigyasa, so you are the academic winner. You are still a college student, and with this wonderful award, so congratulations. >> Jigyasa: Thank you so much. >> I want to talk to you. So you went to an all-girls high school in India and then got to university in New Delhi and weren't very happy with what you saw when you got to university. Can you tell us a little bit more? >> So I told you what was at the end. What I see is... I am doing my undergraduate degree in Computer Science and Technology. In my batch, 80% of them are boys and the rest, girls, and not much interested in pursuing a career in technology, as such. They're pursuing different stuff like arts, designing, or even going for civil services back home. So when I came, I wanted to actually pursue a career in technology and do something apart from academics. Not just books, but do something so that I can apply the concepts somewhere. We were just studying different models of software engineering, but I wanted to be a part of a team which actually implements it. So Open Source was the only way, because I had internet, I had a good internet connection, I had a laptop and lots of free time. So one day I came across Pharo. The name itself fascinated me because it reminded me of Egyptian mummies and all. So that's how I actually got into Pharo. I've been contributing to it for three years now and have also been a part of different worldwide programs like Google Summer of Code, and to give back to the community which has helped me so much, starting right from scratch, I try to mentor budding developers and programmers through programs; one of them is Learn IT Girl. So it pairs females, both mentors and mentees, worldwide. So not only do you get to know about technology, but you can also know about their culture by being a team and knowing how it works, what their working styles and temperaments are.
Also, I wanted to be a part of something local so that I could interact with them physically so I'm the Director for Delhi Network of Women who Code which has more than 400 plus members back in New Delhi and I organize code labs, teach them, or randomly give pep talks sot that they do not feel bogged down and have enough to look forward to. It's been a pretty exciting journey, as I say. >> It's just beginning. >> And this is the thing is that we are bombarded with headlines about how difficult it is for women in the technology industry because it is such a male-dominated industry. There's a lot of sexism, there's a lot of discrimination, a lot of biases where people just don't put women and technology together. You think of a technologist, you think of an engineer, you think of a guy. So how do you think that these awards, DeLisa, are changing things? What are your hopes and dreams for women in this sector? >> Well, we've come so far in terms of the way we think about supporting women just in our conference alone. And so, I think that when we're really, really successful we won't need this award anymore. But we have a long way to go between now and then. Women like these women are just so inspiring and by sharing their stories and showing what women can do future generations of girls, hopefully, will be inspired to join. Men will understand the contributions that women are making today and it will help really generate the next leaders in Open Source that are women. >> Anvi, five years from now, what do you hope? How many labs do you hope to have opened? What's your grand plan? >> So we have 22 labs right now, which is so exciting, in five countries. >> In how long? >> So, we're eight years old. We were a 501(c)(3) in 2009, so super exciting. So my hope is that ... We are currently focusing in Oaxaca and we just formed a partnership with a local university down there to provide support because, as we know, technology is just one piece of the puzzle. We need the community, we need the support, we need the education pieces along with the technology to really fulfill the project. So my hope is that ... At this point, we've kind of figured out how to deploy one lab at a time and my hope is that now we can do this at scale. That we can work with local universities, governments, and actually get .... Reach out to kids who need it because I think Oaxaca has one of the lowest literacy rates in all of Mexico. This is definitely communities where most of the kids do not go on to high school and definitely most do not go on to college. So if we can make an impact, show the measure, like be able to measure the impact that we're making, longitudinally, I think that then we can grow and we can scale. So, very hopeful. But this is my passion, right. So it's going back to as a woman, how do you find your passion. I think, find what you're passion is and go for it and that makes things so much easier. And I think there's a lot of opportunities for growth and look for people that will support efforts that you're doing, like DeLisa. And Jigyasa, she's mentoring girls already. >> And I think that that's also a great point too. This is the Open Source way because it is about community building and it's about collaboration and that is also, you're doing these things ... The software is a metaphor for what you're doing in life. >> [Jigyasa and Anvi] Yes. >> Jigyasa, what's next for you? So first, graduate from college, that would be >> Yes. (laughing) >> A big priority. But then where do you hope to work? 
>> Actually, I want to learn lots and travel the world, know more about everything. That's what Jigyasa means. So Jigyasa means curiosity in Hindi and Sanskrit, so I hope I live up to my name, and for the next few years I just want to keep the learning mode switched on, be curious, and if I want to do something, at least I'll give it a try, so that I do not regret that I never gave it a try. So always be curious, interact, and give it a try. >> Do you want to continue working in technology or do you want to come to the States? Where do you see your career path? >> My career path, it's like I'm trying to balance everything. I want to learn more theoretically about computer science and technology. Maybe do a Master's degree further and then move on to industry. Also, I am pretty excited about the research work. I've done a couple of them in Europe, Asbarez, and Canada, so I want to do something which is a mix of everything so that it keeps me going. >> Do you see... These are really social initiatives that you're both working on. Do you see that as sort of a real future for Open Source innovation and technology? We know that Open Source is helping companies grow, get more customers, make more money, improve their bottom lines, but we also see it having this big impact on global and social progress. I mean, how untapped is this? Where are we in this? >> Open Source is a way, it's not a technology. It's a way of doing things and thinking about the world. Transparency, using the best ideas, innovating rapidly. We have a lot of complex problems to solve, now and in the future. Using the Open Source way, we will solve those problems more rapidly. Whether it's a technology issue or something entirely outside of technology. >> I agree with that completely. Open Source is a mechanism by which we can accomplish not just technical innovations, but also social innovations. We have to look at it holistically. We have to look at the ecosystem holistically. It's not just technology, it's also society, it's also community, education, and how do all the puzzle pieces fit together. >> DeLisa, we talked a little bit about the challenges of recruiting and retaining women in this industry. What is Red Hat doing to get the best and the brightest and the most talented women engineers? >> Well, we've come a long way. We have a long way to go. The first thing we wanted to do is to create an ecosystem within Red Hat that was very welcoming and inclusive, because if you are recruiting people and they come in and they have an experience that isn't positive, they're going to go right out the door. So the most important thing was shoring up our community and creating an environment. So we focused on that, really, in the beginning. Then we started thinking about outreach. Now, the problem is so complex to solve, right. So we started realizing there's not enough people to outreach to. So now our next step has been to start to go deeper into the school systems and start partnering. We have a partnership with BU and also the city of Boston where we supported girls coming from middle school into a lab environment and doing some fun stuff; they get introduced to technology and we're going to keep our eyes on them, and we'd like to recreate this type of experience in multiple places, so really go deeper in to help create an interest at the middle school age with girls. Because that's when, we understand, we need to get them interested.
>> And that's when research shows confidence falls off and women, young girls, start raising their hands less in class. >> And all that stuff. Yeah, it's such a difficult issue but we hope that we will make a difference by reaching into the pipeline and then certainly retaining. We develop our women, we really focus on that. We want to support them as leaders and so it's the whole pathway. >> And Jigyasa, are you finding that your mentorship is making a difference for the young women you're working with? Young girls? >> It certainly is because even after the program ends I receive messages and emails from girls and boys alike about the program or how they want to build their own product. So, I remember one of the girls from Romania. I mentored her during a program sponsored by Google and all she wanted to build was a website for herself and she's very young. So she used to text me about what technologies she should use and how is it shaping up. Can I test it for her? So I really liked that even after the program ended, she kept up her spirit and is still continuing with it. >> And as DeLisa says, now you got to keep an eye on her and make sure she stays with it and everything. Well, DeLisa, Anvi, Jigyasa, thank you so much for joining us. Congratulations. >> Thank you so much. >> Well-deserved. >> Thank you. >> Thank you. >> This has been Rebecca Knight at the Red Hat Summit in Boston, Massachusetts. We''ll be back with more after this. (electronic beat)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
DeLisa | PERSON | 0.99+ |
Keith | PERSON | 0.99+ |
Rebecca Knight | PERSON | 0.99+ |
Anvi | PERSON | 0.99+ |
2009 | DATE | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Nick van Wiggeren | PERSON | 0.99+ |
Avni Khatri | PERSON | 0.99+ |
Jigyasa | PERSON | 0.99+ |
India | LOCATION | 0.99+ |
Canada | LOCATION | 0.99+ |
Nick Van Wiggeren | PERSON | 0.99+ |
one year | QUANTITY | 0.99+ |
Mexico | LOCATION | 0.99+ |
Jigyasa Grover | PERSON | 0.99+ |
Cambridge | LOCATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
two pieces | QUANTITY | 0.99+ |
Nick | PERSON | 0.99+ |
Valencia | LOCATION | 0.99+ |
five | QUANTITY | 0.99+ |
Oaxaca | LOCATION | 0.99+ |
eight | QUANTITY | 0.99+ |
New Delhi | LOCATION | 0.99+ |
Romania | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Khan Academy | ORGANIZATION | 0.99+ |
DeLisa Alexander | PERSON | 0.99+ |
March | DATE | 0.99+ |
10 year | QUANTITY | 0.99+ |
100% | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
five year | QUANTITY | 0.99+ |
22 labs | QUANTITY | 0.99+ |
Boston | LOCATION | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
eight years | QUANTITY | 0.99+ |
one foot | QUANTITY | 0.99+ |
five years | QUANTITY | 0.99+ |
MySQL | TITLE | 0.99+ |
Antequera | LOCATION | 0.99+ |
7,500 people | QUANTITY | 0.99+ |
Monday night | DATE | 0.99+ |
five countries | QUANTITY | 0.99+ |
two new labs | QUANTITY | 0.99+ |
two different ways | QUANTITY | 0.99+ |
last week | DATE | 0.99+ |
80% | QUANTITY | 0.99+ |
20 | QUANTITY | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
Oaxaca City | LOCATION | 0.99+ |
30 minutes | QUANTITY | 0.99+ |
iOS | TITLE | 0.99+ |
27 different knobs | QUANTITY | 0.99+ |
Two | QUANTITY | 0.99+ |
KubeCon | EVENT | 0.99+ |
Bobby Patrick - HP Discover Las Vegas 2014 - theCUBE - #HPDiscover
>> Announcer: Live from Las Vegas, Nevada, it's the Cube, at HP Discover 2014, brought to you by HP. >> The keynotes this afternoon: Meg Whitman was just on a panel with Thomas Friedman, and Intel, and Satya Nadella of Microsoft, and it was pretty interesting. I'm here with Jeff Frick, and we noted how passionate Meg is about politics and government. Bobby Patrick is here. We've been drilling down into cloud all day. Bobby is the CMO of the HP Cloud division; a lot of new announcements coming out, a lot of action in HP Cloud. Bobby, welcome to the Cube. >> Yeah, thanks, it's great to be here. >> Good to see you. So, good keynotes. That was a good refresher, you know; a lot of these keynotes are just products, pushing and pushing, and we had some of that earlier, but I thought it was a good eye-opening, refreshing kind of discussion, so it was very worthwhile. But anyway, you're relatively new to HP. How's it going? >> It's great, it's exciting. I joined at a great time for the company. We were gearing up for the big launch of our new brand, HP Helion, that was launched on May seventh, so just a little over a month ago, and we hit the market hard globally. It's a complete pulling together of all of our products and services around cloud under a single brand. Customers love it, and it's really reiterated our commitment to OpenStack. And, you know, it's great: HP announced the billion dollar commitment to HP Helion over the next two years, so it's backed by some big funding. That's a great time to come in. >> So I saw that. Help us unpack that billion dollars. It was a big number, right? It's a popular number, right? Even Warren Buffett underwrote the whole thing for March Madness, right, giving away a billion dollars for the perfect bracket; no longer a million, it's up to a billion. So what is that billion? What does it go to? What does it comprise? >> Yeah, I mean, it goes to R&D. We're the most active corporate sponsor behind OpenStack, which is the fastest-growing open source project on the planet. We have more contributors, we have more team leads for the different projects, and so we're working with the community, we're hiring OpenStack experts, always looking for the best in the world, all around the world, and we're then hardening and curating it and making it commercial now with our support. And we believe it's the underpinning of the future of what we call hybrid cloud: the ability to put some of your information, some of your applications within an enterprise, some in the public cloud, some in different countries that matter for compliance reasons, and to be able to move around between those different clouds in a very easy fashion. So this money is going to that R&D, to skills, and to, you know, a truly global launch.
>> So when you think about the sort of messaging for HP Cloud, what do you want customers to think about in the Helion brand and the HP Cloud? >> Yeah, the number one thing is commitment to open standards. You heard Martin Fink today talk about HP Labs and their commitment to open source; we're all in on open source. We believe it's the way to deliver innovation faster; we can bring new technologies to market faster for customers. So we're all in on open source, we are committed to the projects that matter to the next 20 years of IT, and that commitment has to be real. We have to prove it, to say, you know, you can run our software on other hardware. We think we'll have some optimal integrated solutions for you using our entire stack, but this is about eliminating vendor lock-in, which is one of the biggest challenges IT departments have faced in the last 20 years. And so I think the commitment behind open is at the core of our messaging. >> So we should mention, Martin Fink gave, well, I really liked his presentation. I have been saying for, I don't know, years that HP's got to get back to its roots, right, which are invention, right? And I have not heard until today something that excited me about invention, and we saw it today. Now, invention is not easy at all. We've talked a lot about how the previous administration cut, cut, cut to the bone, right? It takes a long time to turn that ship. But we saw today, I think, why he was put into that job for a very particular reason, and I'd say it's about two things: one, it's a guy who's going to commercialize inventions and bring them to the marketplace, and two, there's going to be a heavy systems focus. So he basically showed a little leg on The Machine, which eventually is probably going to be powering your clouds, right? He also announced HP is going to put forth a new open source operating system optimized for non-volatile memory: not only a blank sheet of paper that they're going to work on with universities, but also a Linux derivative, a stripped-down Linux, and one for Android. That was exciting. >> Yeah, and I think what's great also is the cloud business actually falls under Martin. So our entire business worldwide, our cloud effort, our R&D, our product development, is all under Martin, who is our CTO and runs HP Labs. And when you look at the problems he's addressing with The Machine, he's going after the massive scale challenges of the internet, right, and the massive scale challenges of the cloud, and the data deluge that we're all facing with the Internet of Things. And so, you know, what's great is, by being a part of the Labs and being part of Martin's organization, we're injecting that thinking into our cloud, we're injecting it into our innovation, and you can see a roadmap here, right? You can see this whole new architecture. You talk about an architecture that's been in existence since 1950, the von Neumann architecture, all the way to now, a world with copper at the core. The world's in need of a new architecture, and so it's great to be part of that. >> That was a cool talk, talking about electrons, photons, and ions: electrons compute, photons communicate, and ions store, right? And that in essence is the future direction of where HP is going with The Machine: massive memory, blowing away the volatility hierarchy, blowing away, ultimately, slow spinning disks, using memory store, right, as the platform for future systems. I love it. >> Yeah, and he mentioned also, one thing that's close to my heart is the distributed mesh. You saw that distributed mesh, where different hardware and software combinations sit at different points of the network and they work together, you know, compute and data. And that's really hybrid cloud. Hybrid cloud is putting compute workloads in certain areas and having data stored and distributed for maximum availability, and doing that, you know, with self-service, and doing that in a way that IT organizations can scale effectively.
>> Yeah, I think that, you know, as a marketing person you realize that customers want to know that you're relevant for their future, right? And as much as I love things like StoreOnce, it's not the future of computing. What comes out of HP Labs potentially is, so that's got to have customers really excited. This is really the first time you've unveiled it, right, massively, at public scale? >> You know, that's why I joined HP. I saw that coming a few months ago, and the new style of IT thinking, where we're saying, you know, we're radically going to be at the core of helping IT transition from the old style, very inward, to a customer-centric style, where you're delivering the consumer experience in the business world. And I saw that with HP, and it got me excited, and I joined on board. >> Yeah, the other part that Martin mentioned, and I had no idea of the power of HP Labs, is the leveraging of open source as well, which was probably not a tool in the arsenal not that long ago, to really bring the power of a large, engaged community so you can attack specific problems and make that a core piece of the process. >> Yeah, think about it: we've got thousands of the world's best developers, right, the millennial developers, these guys working around the clock on our core cloud future called OpenStack, contributing to that, including our experts. And then we're taking that and bringing it to market, providing that twenty-four-seven support, testing and hardening it, doing the things you need to do to help an enterprise feel comfortable with that decision. We could never do that and deliver that kind of innovation on our own; we just couldn't afford it, we wouldn't be able to deliver on it. These are the best minds of the world who are contributing to this, and we're all in, open, in fact. >> So we talked about what the brand stands for; you said open, no lock-in. Can open source innovation occur at a pace with somebody who's got full control of a stack? >> It's much faster, actually. I mean, watch the innovation of OpenStack: it's only, what, four years old? We're just at the four-year birthday of OpenStack, and already that's an entire cloud computing platform. You've got database-as-a-service projects like Trove, you've got object storage projects like Swift, and block storage like Cinder. All of these things are being worked on by people around the world; you could never deliver that on your own. And so what's happening is the pace of innovation with an open source project like OpenStack is like a hockey stick, and so I think if we did this ourselves, we or anyone else, you would never be able to deliver the kind of innovation that's coming to market now.
>> We talked about some of the announcements you guys made. Why don't we actually go back a month, right, to Helion, and then work through today: we've got some HPC announcements, you've got the Helion Network. Start with Helion. >> So what's great about Helion is it really brought together a lot of great products and services for the cloud that already existed, and it took OpenStack, and it was our first foray into the market with an OpenStack distribution. And what's important, actually, is we have a technology called HP CloudSystem that is actually the most popular private cloud platform on the planet right now, with almost two thousand companies, a third of the Fortune 100, right now using that technology. So it is a proven, capable platform used by big banks and others. We're injecting OpenStack into that so that you can, over time, scale that out with new applications. And so the launch really was about pulling all the pieces together, pulling our support and services together, and saying to a customer, with confidence: here's our cloud portfolio, and here's how we can take you on a journey, at your pace, and accelerate that journey and take advantage of that cloud portfolio. That was really the launch a month ago. And today at Discover, I mean, only a month later, we've already done a number of great things. One is we brought out the commercial version of OpenStack. So we've launched the community one, you can download it, thousands of downloads already; the commercial version is coming out now, and we announced pricing. And what we are all about here, and this is what's really, really important, is accelerating the adoption of OpenStack throughout the enterprise. We're about breaking down the barriers that have inhibited the proliferation of this great technology. So one of those things today was the price point: we announced a $1,400 per year, per server, all-in price point for HP Helion OpenStack. And that's critical, because this is a scale-out product; you're going to have dozens, hundreds, maybe even thousands of these all around the world, and so the price point is disruptive, it's the lowest on the planet. And, you know, we said it's going to be simple and easy: we're not going to do all of this good-better-best packaging; it's super easy, and that's a big part of today. The other part of today is we said we're going to work with partners, we're going to deploy this all around the world, and that was the Helion Network announcement, along with AT&T and British Telecom and Intel, and that's just huge for today.
>> Now, Helion comprises both on-premise and an HP public cloud, correct? >> That's right. >> So talk about how that pricing works. I mean, I like what you're saying, simple, because cloud pricing is really complicated. >> Yeah, so we're probably the largest user of OpenStack in production today, with our public cloud. So we use it, and people can consume services from that and buy them on an as-you-go basis. But with OpenStack, what's really happening is people are able to deploy their own private clouds, right, or a service provider could deploy and build their own public cloud. So when I talk about the price point, I'm talking about a customer building their own cloud, building their own cloud in a third-party data center or in one of HP's 82 data centers, and that price point is easy: you can predict it in your business model and feel comfortable about what it's going to cost, you know, two, three, four years out. >> And so help me understand, let's unpack that a little bit: what am I getting for that fourteen hundred dollars per server? >> So you get the entire, and this is what's amazing, you get the entire cloud operating system called OpenStack, right? You get all of the projects that are part of the OpenStack build. You're getting object storage, it's, you know, a la Amazon S3 but in a box, called Swift, right, with a Swift API, and you can build that and do that yourself now, in a way that gives you full control and full flexibility. You get the database-as-a-service product, you get a compute engine, with Cinder, with Nova for the compute, everything. And so you get all of this in that box, all of this, and you can go deploy it, and you can benefit now from the thousands of developers who are putting out new code every six weeks and innovating. >> So, okay, so all the new innovations will fall under that umbrella? >> That's right, and at whatever pace they choose to use it. You might say, I'm just building a cloud storage environment, you might choose to be heavy on Swift, and that's what you're doing, but it is all-inclusive, and you can use the entire cloud platform, or you can build a storage platform or a database-as-a-service platform. >> That's a different model, clearly. What are customers telling you about that? >> Yeah, so they want the control and the flexibility of having their own platform, for, you know, security reasons, for compliance; they want to put their data in their own centers. But they're also saying, I want to use public cloud some too, and I like the idea that if OpenStack is here and OpenStack is there, right, same code bases, I can fairly easily take a workload, take an application, and go from here to there and back and forth. That kind of flexibility, call it interoperability, is what's coming down the road with OpenStack underneath; it's something that does not exist today, and it's what everybody wants. >> Make sure I understand: so I'm paying $1,400 per server for that OpenStack instance on-premise, and then when I want to access public cloud services, I pay as I go? >> You might want to burst; you might have some peak demand, you burst out there, and you pay for that, either with us or with a partner of ours. >> Excellent. Now, you also had some HPC announcements. >> That's right, so there's a number of them. What's great is, within HP now, people are taking Helion OpenStack and putting it in their products. Our HPC group, the high-performance computing group, said, hey, we want to have a self-service mechanism, we want to be able to scale out the architecture people want in HPC. So they put OpenStack inside their solution and launched it today, and so it's, you know, OpenStack embedded in HPC. >> Open, hybrid, simple to consume is what I'm hearing. >> That's right, that's right. It's predictable. >> Alright, good. Dave, Lisa-Marie wrote the book on this, so this is great: if you don't believe Bobby, Lisa came and gave me this, right, gave me the book; it's "OpenStack Technology: Breaking the Enterprise Barrier." >> You've got it, you've got it. It's one of the best reads on the planet right now. >> Yeah, excellent. Alright, so take it to the next level: what is it, am I just buying compute, am I just getting capacity? >> If you just want capacity, you might just build a storage cloud yourself, or you might use our public cloud storage, or, with our Helion Network, our partners around the world will be deploying OpenStack and you can buy it from them. >> Awesome. Alright, we've got to leave it there. Bobby, thanks so much for coming on the Cube. >> It was a pleasure. >> Alright, keep it right there, everybody. John Furrier is in the house, he's back from San Francisco, or San Jose; good to have him back, John. Keep right there, we'll be back with John Furrier in just a moment.
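The services Bobby lists, Swift object storage, Cinder block storage, Nova compute, and Trove database-as-a-service, are all driven through the same OpenStack APIs whether the cloud is a private Helion deployment or a public OpenStack cloud, which is the portability he is describing. Below is a minimal, illustrative sketch of what that looks like from a developer's side, using the present-day openstacksdk Python client rather than anything discussed in the interview; the cloud names ("helion-private", "hp-public") and the image, flavor, and network names are hypothetical placeholders that would come from a local clouds.yaml and the target cloud's catalog.

```python
# Illustrative sketch only: cloud names, image/flavor/network names are assumed
# placeholders, not HP product APIs. The point is that the identical calls work
# against any OpenStack cloud, private or public.
import openstack


def exercise_cloud(cloud_name: str) -> None:
    # Credentials and endpoints come from clouds.yaml / environment, not code.
    conn = openstack.connect(cloud=cloud_name)

    # Swift-style object storage: create a container and drop an object in it.
    conn.object_store.create_container(name="demo-container")
    conn.object_store.upload_object(
        container="demo-container",
        name="hello.txt",
        data=b"hello from the same API on any OpenStack cloud\n",
    )

    # Cinder block storage: a small volume that could back a database or VM.
    volume = conn.block_storage.create_volume(name="demo-volume", size=10)

    # Nova compute: boot a server from named image/flavor/network placeholders.
    image = conn.compute.find_image("ubuntu-22.04")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("private")
    server = conn.compute.create_server(
        name="demo-server",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    conn.compute.wait_for_server(server)
    print(f"{cloud_name}: booted {server.name}, volume {volume.id} ready")


if __name__ == "__main__":
    # "Burst" scenario: run the identical workflow against a private cloud and
    # a public one; only the cloud name (and so the endpoint) changes.
    for cloud in ("helion-private", "hp-public"):
        exercise_cloud(cloud)
```

The per-server subscription described in the interview fits the same mental model: the price covers the whole stack, so whether a given deployment leans on Swift, Trove, or Nova is a usage decision rather than a licensing one, and moving a workload between clouds is a matter of pointing the same code at a different entry in clouds.yaml.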
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
San Francisco | LOCATION | 0.99+ |
San Jose | LOCATION | 0.99+ |
Martin | PERSON | 0.99+ |
British Telecom | ORGANIZATION | 0.99+ |
May seventh | DATE | 0.99+ |
Jeff Rick | PERSON | 0.99+ |
billion dollars | QUANTITY | 0.99+ |
1000 | QUANTITY | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Bobby Patrick | PERSON | 0.99+ |
82 data centers | QUANTITY | 0.99+ |
Dave Lisa Marie | PERSON | 0.99+ |
today | DATE | 0.99+ |
thomas friedman | PERSON | 0.99+ |
martin | PERSON | 0.99+ |
Warren Buffett | PERSON | 0.99+ |
four-year | QUANTITY | 0.99+ |
ATT | ORGANIZATION | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Swift | TITLE | 0.99+ |
a month later | DATE | 0.99+ |
Android | TITLE | 0.99+ |
Martin Fink | PERSON | 0.99+ |
hpc | ORGANIZATION | 0.99+ |
thousands | QUANTITY | 0.99+ |
Linux | TITLE | 0.99+ |
billion dollar | QUANTITY | 0.99+ |
Helion | ORGANIZATION | 0.99+ |
Bobby | PERSON | 0.99+ |
Bobby Lisa | PERSON | 0.99+ |
John | PERSON | 0.99+ |
March Madness | EVENT | 0.98+ |
one | QUANTITY | 0.98+ |
two | QUANTITY | 0.98+ |
Ryan | PERSON | 0.98+ |
OpenStack | TITLE | 0.98+ |
Las Vegas Nevada | LOCATION | 0.98+ |
1950 | DATE | 0.97+ |
Nisha | PERSON | 0.97+ |
two thousand companies | QUANTITY | 0.97+ |
HP Helion | ORGANIZATION | 0.97+ |
meg | PERSON | 0.96+ |
Helia | ORGANIZATION | 0.95+ |
month ago | DATE | 0.95+ |
few months ago | DATE | 0.94+ |
HP Labs | ORGANIZATION | 0.94+ |
both | QUANTITY | 0.94+ |
dozens hundreds | QUANTITY | 0.94+ |
OpenStack | ORGANIZATION | 0.93+ |
third | QUANTITY | 0.93+ |
first time | QUANTITY | 0.92+ |
first foray | QUANTITY | 0.92+ |
fourteen hundred dollars per | QUANTITY | 0.91+ |
four years old | QUANTITY | 0.9+ |