Christina Kosmowski, Slack | Girls in Tech Catalyst Conference 2018
>> From San Francisco, it's theCUBE, covering Girls in Tech Catalyst Conference. Brought to you by Girls in Tech. (upbeat music) >> Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're in Downtown San Francisco at Bespoke. It's in the Westfield Shopping Mall, kind of a cool event space up on the fourth floor, and we're at Girls in Tech Catalyst. We were last here a couple years ago in Phoenix, and we're excited to be back. 700 people, really great event, and the program's pretty simple. You've got great women leaders telling their story, and the stories are varied and really cool, and we just got out of Christina's story. She's Christina Kosmowski, global head of customer success at Slack. >> That's right. >> Christina, really good job up there. >> Thank you. >> There was a couple of things I wanted to really kind of jump on that I thought were so important. In the first one you talk about early in your career and raising your hand. When opportunities come up, don't be afraid, raise your hand, go for it. >> Yeah, absolutely. I was always saying, yes to everything. And now I work on saying no to some things. (laughs) >> That's a whole other conversation-- >> I think it's really important that you know there's all those cliches around the fact that you know you've got to go through the window sometimes or you know opportunities are masked and they really are and so just saying yes to everything and really being open to trying new things and learning new experiences will give you opportunities you didn't even realize you had. And so, I always raised my hand, you know, in college to start the soccer team. I raised my hand in my first job to go to Europe and start the London office. I raised my hand to come to Salesforce, at every single point, Salesforce had something new, I said, oh I want to do it and so I was kind of known as the person who always liked to start and build things from scratch. And so, I always wanted to be that yes person and experience these new opportunities. >> And that was huge, I think you said when you started Salesforce, revenue was like 20 million and when you left it was-- >> Almost 10 billion, yeah, it's crazy. It was quite a ride, quite a ride. >> But great, cause then you get those opportunities. >> Yeah. >> Another story you were telling which I thought was pretty impactful was, your college soccer experience, you're a soccer player and you know, the difference between putting in your own work and time to achieve something and, you know, nobody ever sees the work that happens when they're not there, but more importantly, bringing along the team. >> Yeah. >> And getting everybody else to buy into your work ethic to raise the performance of the team. I wonder if you can expand on that a little bit. Cause then you said you've used that throughout your career over and over again. >> I have, it was an important lesson. I think, for those that didn't see that speech, I talked about the fact that my freshman year in soccer, it was the first year of the varsity program. We won three games and I was very angry about that and so I spent the next year kind of working my butt off. And so I got to this level but my rest of the team didn't get to the level and so I was able to challenge them to match my level and we were ultimately able to get, you know, into the top sixteen in the country at the end of my career and that was the first time that I realized it's not just about me. 
And I've seen that in every step in the way is, I can get there, I can get my idea there, I can work as hard as I can but if I can't empower the team and I can't bring all the cross-functional leaders along with me, we aren't going to achieve what we need to achieve. And at Slack, I've even seen that to be even more of the case, because I've come into a function that's brand new, it started very much as a product-based company versus Salesforce was a little more sales focused. And so it's really important that people understand what our mission is, why it's important, how we can bring these other organizations with us. >> Right, so a great kind of business theme that touched both on Salesforce and at Slack, it's kind of the subscription economy. >> Yes. >> And we've done this conference and we all switched over to our paid Adobe subscription versus trying to find a friend who'll get you a license for a deal at the end of the year. (laughs) But I think the really important thing that you touched on, when you go to subscription economy it really changes the dynamic between you and your customer. And you run customer success. >> I do. >> Because it's not just take the check and send 'em the 15% maintenance bill anymore, now you've got to build a relationship, you've got to deliver value each and every month cause they're paying you each and every month. And so you've translated that into actually building an organization that supports this very different relationship. >> That's right. >> So why don't you tell us, you know, how did that transform? How hard of a sell was that and what's the ultimate outcome with your relationship with the customers? >> I think it's so important to realize that technology is really important, but if we can't apply that into the business setting and to specific outcomes and use cases, it doesn't become valuable over time. And so, we've built an organization that really focuses on customer maturity and value. And so we take it in steps. And so we look at what are those things we can do to give value and outcomes and affect people the way they're working today? And then what does that look like tomorrow, how do we build upon that, and then what does it look like to, they can get to this fully transformed state, and we've done that through a combination of working with product to build features and in-app education, we work with all of our customers to understand what are their needs, we bring people to the table, we bring one to many programs, we've really created this champion network where we are able to allow these peer to peer relationships, and really have this network effect with our customers, and so there's lots of different methods and vehicles that we're doing to really ensure that our customers are getting that outcome. >> Yeah, it's interesting, we cover a lot of the AWS shows and, you know, Jeff Bezos will talk about them just being maniacally customer focused, and lots of companies like to talk about being maniacally customer focused, but most of them are not, they're product focused or they're competitor focused or they're kind of opportunity focused, they're not customer focused. So, how do you build that culture, can you switch if it's not there or does it got to be from the top down at the beginning? 
>> You can, you can, I think, you know, at Slack, we've been really fortunate it also has that extreme customer focus, but our organization started about 15 months ago, so we brought even more rigor to that, and so there's lots of programs you can do to affect the culture. So, one of the programs we have is a red account program, and one of the things there is really about bringing all the company together to swarm around issues or risks that our customers might have seen, and that's one way that we can start to talk about customer importance. >> What do you call it? >> We call it the customer red account program. >> Red account, so red like trouble, because, so you basically-- >> We swarm. >> Swarm, swarm, what a great, swarm meaning a lot of people from a lot of different places. >> Lot of different places, and there's full accountability on all parts of the organization to solve it, because my organization can't solve everything, we're really just the advocates and the facilitators back into, back into Slack, and so that's important that we have that accountability, and we're swarming all around the customer. We have product feedback sessions where we're able to bring that advocacy back, we have a lot of surveys and net promoter score, things where we're measuring and looking for accountability about how we're doing with our customers, and so there's lots of different programs that you can help bring this to light, even in just tactical ways that help ultimately build this culture of customer success. >> See, so like I said, you've got a lot of sniffers in the system to see when you need to call a code red. So, I'm just curious, when you get everyone together, are people surprised where the problems are, is it like, oh, I thought we were doing a great job, and this group's like, no, no, no, you know, you're the problem? >> Sometimes, sometimes, but I think it is really around it being a team effort and really understanding that when issues or challenges expose themselves, there's multiple root causes and you can really understand, okay, part of it could be a product, part of it could be how we supported them, part of it could be in some of our marketing and messaging. And how do we all solve that in a more universal experience? >> All right, last question before I let you go. Just your impressions of the Catalyst today, you said it's your first time here. >> This is my first time here, I am blown away by the energy and excitement and really the quality of speakers and conversations that are happening, I've been hanging around all morning, and just really powerful conversations, and I think I said this in my speech, but we are in a really fortunate time right now, and I think our time is now, and it's so great to see all these women come together, and we, you know, we're the ones that can do this. >> Excellent, we'll see you at Amplify later this year. >> Absolutely. >> All right, Christina, well, thanks for stopping by and sharing your story. >> All right, thanks. >> All right, she's Christina, I'm Jeff, you're watching theCUBE, we're at Girls in Tech Catalyst in downtown San Francisco. Thanks for watching. (upbeat music)
SUMMARY :
Brought to you by Girls in Tech. and the program's pretty simple. In the first one you talk about early no to some things. around the fact that you It was quite a ride, quite a ride. you get those opportunities. and you know, the difference I wonder if you can expand And at Slack, I've even seen that to be the subscription economy. that you touched on, when and send 'em the 15% and affect people the way a lot of the AWS shows and, you know, and so there's lots of programs you can do We call it the customer a lot of people from a that you can help bring this to light, to see when you need to call a code red. there's multiple root causes and you can of the Catalyst today, and we, you know, we're the ones Excellent, we'll see you for stopping by and sharing your story. we're at Girls in Tech
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Christina Kosmowski | PERSON | 0.99+ |
Christina | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Jeff Bezos | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Phoenix | LOCATION | 0.99+ |
15% | QUANTITY | 0.99+ |
20 million | QUANTITY | 0.99+ |
three games | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Europe | LOCATION | 0.99+ |
first time | QUANTITY | 0.99+ |
San Francisco | LOCATION | 0.99+ |
fourth floor | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
tomorrow | DATE | 0.99+ |
first job | QUANTITY | 0.99+ |
Bespoke | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.98+ |
London | LOCATION | 0.98+ |
Slack | ORGANIZATION | 0.98+ |
first one | QUANTITY | 0.98+ |
first time | QUANTITY | 0.97+ |
each | QUANTITY | 0.97+ |
first year | QUANTITY | 0.97+ |
Westfield Shopping Mall | LOCATION | 0.97+ |
today | DATE | 0.97+ |
one way | QUANTITY | 0.96+ |
Adobe | ORGANIZATION | 0.94+ |
theCUBE | ORGANIZATION | 0.94+ |
Girls in Tech Catalyst Conference 2018 | EVENT | 0.94+ |
both | QUANTITY | 0.92+ |
end | DATE | 0.92+ |
Downtown San Francisco | LOCATION | 0.9+ |
Salesforce | ORGANIZATION | 0.9+ |
Girls in Tech | ORGANIZATION | 0.88+ |
top sixteen | QUANTITY | 0.87+ |
couple years ago | DATE | 0.87+ |
about 15 months ago | DATE | 0.85+ |
Girls in Tech Catalyst Conference | EVENT | 0.84+ |
700 people | QUANTITY | 0.83+ |
later this year | DATE | 0.82+ |
single point | QUANTITY | 0.79+ |
Slack | TITLE | 0.79+ |
Almost 10 billion | QUANTITY | 0.77+ |
one of | QUANTITY | 0.7+ |
Salesforce | TITLE | 0.64+ |
Catalyst | ORGANIZATION | 0.6+ |
the year | DATE | 0.59+ |
every | QUANTITY | 0.59+ |
Girls in Tech Catalyst | ORGANIZATION | 0.58+ |
programs | QUANTITY | 0.48+ |
Adam Wenchel & John Dickerson, Arthur | AWS Startup Showcase S3 E1
(upbeat music) >> Welcome everyone to theCUBE's presentation of the AWS Startup Showcase AI Machine Learning Top Startups Building Generative AI on AWS. This is season 3, episode 1 of the ongoing series covering the exciting startup from the AWS ecosystem to talk about AI and machine learning. I'm your host, John Furrier. I'm joined by two great guests here, Adam Wenchel, who's the CEO of Arthur, and Chief Scientist of Arthur, John Dickerson. Talk about how they help people build better LLM AI systems to get them into the market faster. Gentlemen, thank you for coming on. >> Yeah, thanks for having us, John. >> Well, I got to say I got to temper my enthusiasm because the last few months explosion of interest in LLMs with ChatGPT, has opened the eyes to everybody around the reality of that this is going next gen, this is it, this is the moment, this is the the point we're going to look back and say, this is the time where AI really hit the scene for real applications. So, a lot of Large Language Models, also known as LLMs, foundational models, and generative AI is all booming. This is where all the alpha developers are going. This is where everyone's focusing their business model transformations on. This is where developers are seeing action. So it's all happening, the wave is here. So I got to ask you guys, what are you guys seeing right now? You're in the middle of it, it's hitting you guys right on. You're in the front end of this massive wave. >> Yeah, John, I don't think you have to temper your enthusiasm at all. I mean, what we're seeing every single day is, everything from existing enterprise customers coming in with new ways that they're rethinking, like business things that they've been doing for many years that they can now do an entirely different way, as well as all manner of new companies popping up, applying LLMs to everything from generating code and SQL statements to generating health transcripts and just legal briefs. Everything you can imagine. And when you actually sit down and look at these systems and the demos we get of them, the hype is definitely justified. It's pretty amazing what they're going to do. And even just internally, we built, about a month ago in January, we built an Arthur chatbot so customers could ask questions, technical questions from our, rather than read our product documentation, they could just ask this LLM a particular question and get an answer. And at the time it was like state of the art, but then just last week we decided to rebuild it because the tooling has changed so much that we, last week, we've completely rebuilt it. It's now way better, built on an entirely different stack. And the tooling has undergone a full generation worth of change in six weeks, which is crazy. So it just tells you how much energy is going into this and how fast it's evolving right now. >> John, weigh in as a chief scientist. I mean, you must be blown away. Talk about kid in the candy store. I mean, you must be looking like this saying, I mean, she must be super busy to begin with, but the change, the acceleration, can you scope the kind of change you're seeing and be specific around the areas you're seeing movement and highly accelerated change? >> Yeah, definitely. And it is very, very exciting actually, thinking back to when ChatGPT was announced, that was a night our company was throwing an event at NeurIPS, which is maybe the biggest machine learning conference out there. 
And the hype when that happened was palpable and it was just shocking to see how well that performed. And then obviously over the last few months since then, as LLMs have continued to enter the market, we've seen use cases for them, like Adam mentioned, all over the place. And so, some things I'm excited about in this space are the use of LLMs and, more generally, foundation models to redesign traditional operations research-style problems, logistics problems, like auctions, decisioning problems. So moving beyond the already amazing use cases, like creating marketing content, into more core integration and a lot of the bread and butter companies and tasks that drive the American ecosystem. And I think we're just starting to see some of that. And in the next 12 months, I think we're going to see a lot more. If I had to make other predictions, I think we're going to continue seeing a lot of work being done on managing inference time costs via shrinking models or distillation. And I don't know how to make this prediction, but at some point we're going to be seeing lots of these very large scale models operating on the edge as well. So the time scales are extremely compressed, like Adam mentioned, 12 months from now, hard to say. >> We were talking on theCUBE prior to this session here. We had theCUBE conversation here and then the Wall Street Journal just picked up on the same theme, which is the printing press moment that created the enlightenment stage of history. Here we're in a whole nother automating-intellect efficiency, doing heavy lifting, the creative class coming back, a whole nother level of reality around the corner that's being hyped up. The question is, is this justified? Is there really a breakthrough here or is this just another result of continued progress with AI? Can you guys weigh in, because there's two schools of thought. There's the, "Oh my God, we're entering a new enlightenment tech phase, the equivalent of the printing press in all areas." Then there's, "Ah, it's just AI (indistinct) inch by inch." What's your guys' opinion? >> Yeah, I think on the one hand when you're down in the weeds of building AI systems all day, every day, like we are, it's easy to look at this as incremental progress. Like we have customers who've been building on foundation models since we started the company four years ago, particularly in computer vision for classification tasks, starting with pre-trained models, things like that. So that part of it doesn't feel real new, but what does feel new is just when you apply these things to language with all the breakthroughs in computational efficiency, algorithmic improvements, things like that, when you actually sit down and interact with ChatGPT or one of the other systems that's out there that's building on top of LLMs, it really is breathtaking, like, the level of understanding that they have and how quickly you can accelerate your development efforts and get an actual working system in place that solves a really important real world problem and makes people way faster, way more efficient. So I do think there's definitely something there. It's more than just incremental improvement. This feels like a real trajectory inflection point for the adoption of AI. >> John, what's your take on this? As people come into the field, I'm seeing a lot of people move from, hey, I've been coding in Python, I've been doing some development, I've been a software engineer, I'm a computer science student. I'm coding in C++ old school, OG systems person.
Where do they come in? Where's the focus, where's the action? Where are the breakthroughs? Where are people jumping in and rolling up their sleeves and getting dirty with this stuff? >> Yeah, all over the place. And it's funny you mentioned students; in a different life I wore a university professor hat, and so I'm very, very familiar with the teaching aspects of this. And I will say toward Adam's point, this really is a leap forward in that techniques like a co-pilot, for example, everybody's using them right now and they really do accelerate the way that we develop. When I think about the areas where people are really, really focusing right now, tooling is certainly one of them. Like you and I were chatting about LangChain right before this interview started, two or three people can sit down and create an amazing set of pipes that connect different aspects of the LLM ecosystem. Two, I would say is in engineering. So like distributed training might be one, or just understanding better ways to even be able to train large models, understanding better ways to then distill them or run them. So like this heavy interaction now between engineering and what I might call traditional machine learning from 10 years ago where you had to know a lot of math, you had to know calculus very well, things like that. Now you also need to be, again, a very strong engineer, which is exciting. >> I interviewed Swami when he talked about the news. He's head of Amazon's machine learning and AI, when they made the Hugging Face announcement. And I reminded him how Amazon was easy to get into if you were developing a startup back in 2007, 2008, and that the language models had that similar problem. It took a lot of setup and a lot of expense to get provisioned up, now it's easy. So this is the next wave of innovation. So how do you guys see that from where we are right now? Are we at that point where it's that moment where it's that cloud-like experience for LLMs and large language models? >> Yeah, go ahead John. >> I think the answer is yes. We see a number of large companies that are training these and serving these, some of which are being co-interviewed in this episode. I think we're at that. Like, you can hit one of these with a simple, single line of Python, hitting an API, you can boot this up in seconds if you want. It's easy. >> Got it. >> So I (audio cuts out). >> Well let's take a step back and talk about the company. You guys being featured here on the Showcase. Arthur, what drove you to start the company? How'd this all come together? What's the origination story? Obviously you got big customers, how'd it get started? What are you guys doing? How do you make money? Give a quick overview. >> Yeah, I think John and I come at it from slightly different angles, but for myself, I have been a part of a number of technology companies. I joined Capital One, they acquired my last company and shortly after I joined, they asked me to start their AI team. And so even though I've been doing AI for a long time, I started my career back at DARPA. It was the first time I was really working at scale in AI at an organization where there were hundreds of millions of dollars in revenue at stake with the operation of these models and where they were impacting millions of people's financial livelihoods. And so it just got me hyper-focused on these issues around making sure that your AI worked well and it worked well for your company and it worked well for the people who were being affected by it.
At the time when I was doing this, 2016, 2017, 2018, there just wasn't any tooling out there to support this production management and model monitoring phase of the life cycle. And so we basically left to start the company that I wanted. And John has his own story. I'll let you share that one, John. >> Go ahead John, you're up. >> Yeah, so I'm coming at this from a different world. So I'm on leave now from a tenured role in academia where I was leading a large lab focusing on the intersection of machine learning and economics. And so questions like fairness or the response to the dynamism of the underlying environment have been around for quite a long time in that space. And so I've been thinking very deeply about some of those more R and D style questions as well as having deployed some automation code across a couple of different industries, some in online advertising, some in the healthcare space and so on, where concerns of, again, fairness come to bear. And so Adam and I connected to understand the space of what that might look like in the 2018, 2019 realm from a quantitative and from a human-centered point of view. And so booted things up from there. >> Yeah, bring that applied engineering R and D into the Capital One DNA that he had at scale. I could see that fit. I got to ask you now, next step, as you guys move out and think about LLMs and the recent AI news around the generative models and the foundational models like ChatGPT, how should we be looking at that news? And everyone watching might be thinking the same thing. I know at the board level companies are like, we should refactor our business, this is the future. It's that kind of moment, and the tech team's like, okay, boss, how do we do this again? Or are they prepared? How should we be thinking? How should people watching be thinking about LLMs? >> Yeah, I think they really are transformative. And so, I mean, we're seeing companies all over the place. Everything from large tech companies to a lot of our large enterprise customers are launching significant projects at core parts of their business. And so, yeah, I would be surprised, if you're serious about becoming an AI native company, which most leading companies are, then this is a trend that you need to be taking seriously. And we're seeing the adoption rate. It's funny, I would say the AI adoption in the broader business world really started, let's call it four or five years ago, and it was a relatively slow adoption rate, but I think all that kind of investment in scaling the maturity curve has paid off because the rate at which people are adopting and deploying systems based on this is tremendous. I mean, this has all just happened in the last few months and we're already seeing people get systems into production. So, now there's a lot of things you have to guarantee in order to put these in production in a way that basically is added into your business and doesn't cause more headaches than it solves. And so that's where we help customers: how do you put these out there in a way that they're going to represent your company well, they're going to perform well, they're going to do their job and do it properly. >> So in the use case, as a customer, as I think about this, there's workflows. They might have had an ML AI ops team that's around IT. Their inference engines are out there. They probably don't have visibility on, say, how much it costs, they're kicking the tires.
When you look at the deployment, there's a cost piece, there's a workflow piece, there's fairness you mentioned John, what should be, I should be thinking about if I'm going to be deploying stuff into production, I got to think about those things. What's your opinion? >> Yeah, I'm happy to dive in on that one. So monitoring in general is extremely important once you have one of these LLMs in production, and there have been some changes versus traditional monitoring that we can dive deeper into that LLMs are really accelerated. But a lot of that bread and butter style of things you should be looking out for remain just as important as they are for what you might call traditional machine learning models. So the underlying environment of data streams, the way users interact with these models, these are all changing over time. And so any performance metrics that you care about, traditional ones like an accuracy, if you can define that for an LLM, ones around, for example, fairness or bias. If that is a concern for your particular use case and so on. Those need to be tracked. Now there are some interesting changes that LLMs are bringing along as well. So most ML models in production that we see are relatively static in the sense that they're not getting flipped in more than maybe once a day or once a week or they're just set once and then not changed ever again. With LLMs, there's this ongoing value alignment or collection of preferences from users that is often constantly updating the model. And so that opens up all sorts of vectors for, I won't say attack, but for problems to arise in production. Like users might learn to use your system in a different way and thus change the way those preferences are getting collected and thus change your system in ways that you never intended. So maybe that went through governance already internally at the company and now it's totally, totally changed and it's through no fault of your own, but you need to be watching over that for sure. >> Talk about the reinforced learnings from human feedback. How's that factoring in to the LLMs? Is that part of it? Should people be thinking about that? Is that a component that's important? >> It certainly is, yeah. So this is one of the big tweaks that happened with InstructGPT, which is the basis model behind ChatGPT and has since gone on to be used all over the place. So value alignment I think is through RLHF like you mentioned is a very interesting space to get into and it's one that you need to watch over. Like, you're asking humans for feedback over outputs from a model and then you're updating the model with respect to that human feedback. And now you've thrown humans into the loop here in a way that is just going to complicate things. And it certainly helps in many ways. You can ask humans to, let's say that you're deploying an internal chat bot at an enterprise, you could ask humans to align that LLM behind the chatbot to, say company values. And so you're listening feedback about these company values and that's going to scoot that chatbot that you're running internally more toward the kind of language that you'd like to use internally on like a Slack channel or something like that. Watching over that model I think in that specific case, that's a compliance and HR issue as well. So while it is part of the greater LLM stack, you can also view that as an independent bit to watch over. >> Got it, and these are important factors. When people see the Bing news, they freak out how it's doing great. 
Then it goes off the rails, it goes big, fails big. (laughing) So these models, people see that, is that human interaction or is that feedback, is that not accepting it, or how do people understand how to take that input in and how to build the right apps around LLMs? This is a tough question. >> Yeah, for sure. So some of the examples that you'll see online where these chatbots go off the rails are obviously humans trying to break the system, but some of them clearly aren't. And that's because these are large statistical models and we don't know what's going to pop out of them all the time. And even if you're doing as much in-house testing at the big companies like the Coheres and the OpenAIs of the world, to try to prevent things like toxicity or racism or other sorts of bad content that might lead to bad PR, you're never going to catch all of these possible holes in the model itself. And so, again, it's very, very important to keep watching over that while it's in production. >> On the business model side, how are you guys doing? What's the approach? How do you guys engage with customers? Take a minute to explain the customer engagement. What do they need? What do you need? How's that work? >> Yeah, I can talk a little bit about that. So it's really easy to get started. It's literally a matter of like just handing out an API key and people can get started. And so we also offer alternative, we also offer versions that can be installed on-prem for models that, we find a lot of our customers have models that deal with very sensitive data. So you can run it in your cloud account or use our cloud version. And so yeah, it's pretty easy to get started with this stuff. We find people start using it a lot of times during the validation phase 'cause that way they can start baselining performance models, they can do champion challenger, they can really kind of baseline the performance of, maybe they're considering different foundation models. And so it's a really helpful tool for understanding differences in the way these models perform. And then from there they can just flow that into their production inferencing, so that as these systems are out there, you have really kind of real time monitoring for anomalies and for all sorts of weird behaviors as well as that continuous feedback loop that helps you make your product get better, and observability, and you can run all sorts of aggregated reports to really understand what's going on with these models when they're out there deciding. I should also add that we just today have another way to adopt Arthur and that is we are in the AWS Marketplace, and so we are available there just to make it that much easier to use your cloud credits, skip the procurement process, and get up and running really quickly. >> And that's great 'cause Amazon's got SageMaker, which handles a lot of privacy stuff, all kinds of cool things, or you can get down and dirty. So I got to ask on the next one, production is a big deal, getting stuff into production. What have you guys learned that you could share with folks watching? Is there a cost issue? I got to monitor, obviously you brought that up, we talked about even the reinforcement issues, all these things are happening. What are the big learnings that you could share for people that are going to put these into production to watch out for, to plan for, or be prepared for, hope for the best, plan for the worst? What's your advice? >> I can give a couple opinions there and I'm sure Adam has.
Well, yeah, the big one from my side is, again, I had mentioned this earlier, it's just the input data streams, because humans are also exploring how they can use these systems to begin with. It's really, really hard to predict the type of inputs you're going to be seeing in production. Especially, we always talk about chatbots, but then any generative text task like this, let's say you're taking in news articles and summarizing them or something like that, it's very hard to get a good sampling even of the set of news articles in such a way that you can really predict what's going to pop out of that model. So to me, it's, adversarial maybe isn't the word that I would use, but it's an unnatural shifting input distribution of prompts that you might see for these models. That's certainly one. And then the second one that I would talk about is, it can be hard to understand the costs, the inference time costs behind these LLMs. So the pricing on these is always changing as the models change size, it might go up, it might go down based on model size, based on energy cost and so on, but your pricing per token or per thousand tokens, and that I think can be difficult for some clients to wrap their head around. Again, you don't know how these systems are going to be used after all, so it can be tough. And so again that's another metric that really should be tracked. >> Yeah, and there's a lot of trade off choices in there with like, how many tokens do you want at each step and in the sequence and based on, you have (indistinct) and you reject these tokens and so based on how your system's operating, that can make the cost highly variable. And that's if you're using like an API version that you're paying per token. A lot of people also choose to run these internally and as John mentioned, the inference time on these is significantly higher than a traditional classifier, even an NLP classification model or tabular data model, like orders of magnitude higher. And so you really need to understand how that, as you're constantly iterating on these models and putting out new versions and new features in these models, how that's affecting the overall scale of that inference cost, because you can use a lot of computing power very quickly with these products. >> Yeah, scale, performance, price all come together. I got to ask while we're here on the secret sauce of the company, if you had to describe to people out there watching, what's the secret sauce of the company? What's the key to your success? >> Yeah, so John leads our research team and they've had a number of really cool, I think AI, as much as it's been hyped for a while, commercial AI at least is really in its infancy. And so the way we're able to pioneer new ways to think about performance for computer vision, NLP, and LLMs is probably the thing that I'm proudest about. John and his team publish papers all the time at NeurIPS and other places. But I think it's really being able to define what performance means for basically any kind of model type and give people really powerful tools to understand that on an ongoing basis. >> John, secret sauce, how would you describe it? You got all the action happening all around you. >> Yeah, well, I do appreciate Adam talking me up like that. No, I. (all laughing) >> Furrier: Props to you. >> I would also say a couple of other things here. So we have a very strong engineering team and so I think some early hires there really set the standard at a very high bar that we've maintained as we've grown.
And I think that's really paid dividends as scalabilities become even more of a challenge in these spaces, right? And so that's not just scalability when it comes to LLMs, that's scalability when it comes to millions of inferences per day, that kind of thing as well in traditional ML models. And I think that's compared to potential competitors, that's really... Well, it's made us able to just operate more efficiently and pass that along to the client. >> Yeah, and I think the infancy comment is really important because it's the beginning. You really is a long journey ahead. A lot of change coming, like I said, it's a huge wave. So I'm sure you guys got a lot of plannings at the foundation even for your own company, so I appreciate the candid response there. Final question for you guys is, what should the top things be for a company in 2023? If I'm going to set the agenda and I'm a customer moving forward, putting the pedal to the metal, so to speak, what are the top things I should be prioritizing or I need to do to be successful with AI in 2023? >> Yeah, I think, so number one, as we talked about, we've been talking about this entire episode, the things are changing so quickly and the opportunities for business transformation and really disrupting different applications, different use cases, is almost, I don't think we've even fully comprehended how big it is. And so really digging in to your business and understanding where I can apply these new sets of foundation models is, that's a top priority. The interesting thing is I think there's another force at play, which is the macroeconomic conditions and a lot of places are, they're having to work harder to justify budgets. So in the past, couple years ago maybe, they had a blank check to spend on AI and AI development at a lot of large enterprises that was limited primarily by the amount of talent they could scoop up. Nowadays these expenditures are getting scrutinized more. And so one of the things that we really help our customers with is like really calculating the ROI on these things. And so if you have models out there performing and you have a new version that you can put out that lifts the performance by 3%, how many tens of millions of dollars does that mean in business benefit? Or if I want to go to get approval from the CFO to spend a few million dollars on this new project, how can I bake in from the beginning the tools to really show the ROI along the way? Because I think in these systems when done well for a software project, the ROI can be like pretty spectacular. Like we see over a hundred percent ROI in the first year on some of these projects. And so, I think in 2023, you just need to be able to show what you're getting for that spend. >> It's a needle moving moment. You see it all the time with some of these aha moments or like, whoa, blown away. John, I want to get your thoughts on this because one of the things that comes up a lot for companies that I talked to, that are on my second wave, I would say coming in, maybe not, maybe the front wave of adopters is talent and team building. You mentioned some of the hires you got were game changing for you guys and set the bar high. As you move the needle, new developers going to need to come in. 
What's your advice, given that you've been a professor, you've seen students, I know a lot of computer science people want to shift, they might not be yet skilled in AI, but they're proficient in programming, and that's going to be another opportunity with open source when things are happening. How do you talk to that next level of talent that wants to come into this market to supplement teams and be on teams, lead teams? Any advice you have for people who want to build their teams and people who are out there and want to be a coder in AI? >> Yeah, I have advice, and this actually works for what it would take to be a successful AI company in 2023 as well, which is, just don't be afraid to iterate really quickly with these tools. The space is still being explored on what they can be used for. A lot of the tasks that they're used for now, right? Like creating marketing content using machine learning is not a new thing to do. It just works really well now. And so I'm excited to see what the next year brings in terms of folks from outside of core computer science who are, other engineers or physicists or chemists or whatever, who are learning how to use these increasingly easy to use tools to leverage LLMs for tasks that I think none of us have really thought about before. So that's really, really exciting. And so toward that I would say iterate quickly. Build things on your own, build demos, show them to friends, host them online and you'll learn along the way and you'll have something to show for it. And also you'll help us explore that space. >> Guys, congratulations with Arthur. Great company, great picks and shovels opportunities out there for everybody. Iterate fast, get in quickly and don't be afraid to iterate. Great advice and thank you for coming on and being part of the AWS showcase, thanks. >> Yeah, thanks for having us on John. Always a pleasure. >> Yeah, great stuff. Adam Wenchel, John Dickerson with Arthur. Thanks for coming on theCUBE. I'm John Furrier, your host. Generative AI and AWS. Keep it right there for more action with theCUBE. Thanks for watching. (upbeat music)
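The per-token inference-cost concern Dickerson and Wenchel describe above can be made concrete with a small back-of-the-envelope sketch. The model names, per-1,000-token prices, and traffic volumes below are illustrative assumptions only, not Arthur's figures or any provider's actual pricing.

```python
# Rough monthly cost estimate for serving an LLM through a per-token API.
# All prices and traffic numbers below are made-up placeholders for illustration.

PRICE_PER_1K_TOKENS = {
    "small-model": 0.0005,  # assumed $ per 1,000 tokens (hypothetical)
    "large-model": 0.0300,  # assumed $ per 1,000 tokens (hypothetical)
}

def monthly_token_cost(model: str, tokens_per_request: int,
                       requests_per_day: int, days: int = 30) -> float:
    """Estimate monthly spend from average tokens per request and daily volume."""
    total_tokens = tokens_per_request * requests_per_day * days
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS[model]

# Example: 1,500 tokens per request (prompt + completion), 20,000 requests per day.
for name in PRICE_PER_1K_TOKENS:
    print(f"{name}: ${monthly_token_cost(name, 1500, 20000):,.2f} per month")
```

Even with these toy numbers, the spread between a small and a large model shows why per-token pricing and prompt length are worth tracking as first-class production metrics alongside accuracy and drift.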
SUMMARY :
of the AWS Startup Showcase has opened the eyes to everybody and the demos we get of them, but the change, the acceleration, And in the next 12 months, of the equivalent of the printing press and how quickly you can accelerate As people come into the field, aspects of the LLM ecosystem. and that the language models in seconds if you want. and talk about the company. of the life cycle. in the 2018 20 19 realm I got to ask you now, next step, in the broader business world So in the use case, as a the way users interact with these models, How's that factoring in to that LLM behind the chatbot and how to build the Go-HERE's and the OpenAI's What's the approach? differences in the way that are going to put So the pricing on these is always changing and in the sequence What's the key to your success? And so the way we're able to You got all the action Yeah, well I going to appreciate Adam and pass that along to the client. so I appreciate the candid response there. get approval from the CFO to spend You see it all the time with some of A lot of the tasks that and being part of the Yeah, thanks for having us Generative AI and AWS.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Adam Wenchel | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Adam | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
John Dickerson | PERSON | 0.99+ |
2016 | DATE | 0.99+ |
2018 | DATE | 0.99+ |
2023 | DATE | 0.99+ |
3% | QUANTITY | 0.99+ |
2017 | DATE | 0.99+ |
Capital One | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Arthur | PERSON | 0.99+ |
Python | TITLE | 0.99+ |
millions | QUANTITY | 0.99+ |
Two | QUANTITY | 0.99+ |
each step | QUANTITY | 0.99+ |
2018 20 19 | DATE | 0.99+ |
two schools | QUANTITY | 0.99+ |
couple years ago | DATE | 0.99+ |
once a week | QUANTITY | 0.99+ |
one | QUANTITY | 0.98+ |
first year | QUANTITY | 0.98+ |
Swami | PERSON | 0.98+ |
four years ago | DATE | 0.98+ |
four | DATE | 0.98+ |
first time | QUANTITY | 0.98+ |
Arthur | ORGANIZATION | 0.98+ |
two great guests | QUANTITY | 0.98+ |
next year | DATE | 0.98+ |
once a day | QUANTITY | 0.98+ |
six weeks | QUANTITY | 0.97+ |
10 years ago | DATE | 0.97+ |
ChatGPT | TITLE | 0.97+ |
second one | QUANTITY | 0.96+ |
three people | QUANTITY | 0.96+ |
front | EVENT | 0.95+ |
second wave | EVENT | 0.95+ |
January | DATE | 0.95+ |
hundreds of millions of dollars | QUANTITY | 0.95+ |
five years ago | DATE | 0.94+ |
about a month ago | DATE | 0.94+ |
tens of millions | QUANTITY | 0.93+ |
today | DATE | 0.92+ |
next 12 months | DATE | 0.91+ |
LangChain | ORGANIZATION | 0.91+ |
over a hundred percent | QUANTITY | 0.91+ |
million dollars | QUANTITY | 0.89+ |
millions of inferences | QUANTITY | 0.89+ |
theCUBE | ORGANIZATION | 0.88+ |
Steven Hillion & Jeff Fletcher, Astronomer | AWS Startup Showcase S3E1
(upbeat music) >> Welcome everyone to theCUBE's presentation of the AWS Startup Showcase AI/ML Top Startups Building Foundation Model Infrastructure. This is season three, episode one of our ongoing series covering exciting startups from the AWS ecosystem to talk about data and analytics. I'm your host, Lisa Martin, and today we're excited to be joined by two guests from Astronomer. Steven Hillion joins us, its Chief Data Officer, and Jeff Fletcher, its Director of ML. They're here to talk about machine learning and data orchestration. Guys, thank you so much for joining us today. >> Thank you. >> It's great to be here. >> Before we get into machine learning let's give the audience an overview of Astronomer. Talk about what that is, Steven. Talk about what you mean by data orchestration. >> Yeah, let's start with Astronomer. We're the Airflow company basically. The commercial developer behind the open-source project, Apache Airflow. I don't know if you've heard of Airflow. It's sort of the de-facto standard these days for orchestrating data pipelines, data engineering pipelines, and as we'll talk about later, machine learning pipelines. It really is the de-facto standard. I think we're up to about 12 million downloads a month. That's actually as an open-source project. I think at this point it's more popular by some measures than Slack. Airflow was created by Airbnb some years ago to manage all of their data pipelines and manage all of their workflows and now it powers the data ecosystem for organizations as diverse as Electronic Arts, Conde Nast is one of our big customers, a big user of Airflow. And also not to mention the biggest banks on Wall Street use Airflow and Astronomer to power the flow of data throughout their organizations. >> Talk about that a little bit more, Steven, in terms of the business impact. You mentioned some great customer names there. What is the business impact or outcomes that a data orchestration strategy enables businesses to achieve? >> Yeah, I mean, at the heart of it is quite simply, scheduling and managing data pipelines. And so if you have some enormous retailer who's managing the flow of information throughout their organization they may literally have thousands or even tens of thousands of data pipelines that need to execute every day to do things as simple as delivering metrics for the executives to consume at the end of the day, to producing on a weekly basis new machine learning models that can be used to drive product recommendations. One of our customers, for example, is a British food delivery service. And you get those recommendations in your application that says, "Well, maybe you want to have samosas with your curry." That sort of thing is powered by machine learning models that they train on a regular basis to reflect changing conditions in the market. And those are produced through Airflow and through the Astronomer platform, which is essentially a managed platform for running Airflow. So at its simplest it really is just scheduling and managing those workflows. But that's easier said than done of course. I mean if you have tens of thousands of those things then you need to make sure that they all run and that they all have sufficient compute resources. If things fail, how do you track those down across those 10,000 workflows? How easy is it for an average data scientist or data engineer to contribute their code, their Python notebooks or their SQL code into a production environment?
And then you've got reproducibility, governance, auditing, like managing data flows across an organization which we think of as orchestrating them is much more than just scheduling. It becomes really complicated pretty quickly. >> I imagine there's a fair amount of complexity there. Jeff, let's bring you into the conversation. Talk a little bit about Astronomer through your lens, data orchestration and how it applies to MLOps. >> So I come from a machine learning background and for me the interesting part is that machine learning requires the expansion into orchestration. A lot of the same things that you're using to go and develop and build pipelines in a standard data orchestration space applies equally well in a machine learning orchestration space. What you're doing is you're moving data between different locations, between different tools, and then tasking different types of tools to act on that data. So extending it made logical sense from a implementation perspective. And a lot of my focus at Astronomer is really to explain how Airflow can be used well in a machine learning context. It is being used well, it is being used a lot by the customers that we have and also by users of the open source version. But it's really being able to explain to people why it's a natural extension for it and how well it fits into that. And a lot of it is also extending some of the infrastructure capabilities that Astronomer provides to those customers for them to be able to run some of the more platform specific requirements that come with doing machine learning pipelines. >> Let's get into some of the things that make Astronomer unique. Jeff, sticking with you, when you're in customer conversations, what are some of the key differentiators that you articulate to customers? >> So a lot of it is that we are not specific to one cloud provider. So we have the ability to operate across all of the big cloud providers. I know, I'm certain we have the best developers that understand how best practices implementations for data orchestration works. So we spend a lot of time talking to not just the business outcomes and the business users of the product, but also also for the technical people, how to help them better implement things that they may have come across on a Stack Overflow article or not necessarily just grown with how the product has migrated. So it's the ability to run it wherever you need to run it and also our ability to help you, the customer, better implement and understand those workflows that I think are two of the primary differentiators that we have. >> Lisa: Got it. >> I'll add another one if you don't mind. >> You can go ahead, Steven. >> Is lineage and dependencies between workflows. One thing we've done is to augment core Airflow with Lineage services. So using the Open Lineage framework, another open source framework for tracking datasets as they move from one workflow to another one, team to another, one data source to another is a really key component of what we do and we bundle that within the service so that as a developer or as a production engineer, you really don't have to worry about lineage, it just happens. Jeff, may show us some of this later that you can actually see as data flows from source through to a data warehouse out through a Python notebook to produce a predictive model or a dashboard. Can you see how those data products relate to each other? 
And when something goes wrong, figure out what upstream maybe caused the problem, or if you're about to change something, figure out what the impact is going to be on the rest of the organization. So Lineage is a big deal for us. >> Got it. >> And just to add on to that, the other thing to think about is that traditional Airflow is actually a complicated implementation. It required quite a lot of time spent understanding what was almost a bespoke language that you needed to be able to develop in to write these DAGs, which are like the fundamental pipelines. So part of what we are focusing on is tooling that makes it more accessible to say a data analyst or a data scientist who doesn't have, or really needs to gain, the necessary background in how the semantics of Airflow DAGs work to still be able to get the benefit of what Airflow can do. So there are new features and capabilities built into the Astronomer cloud platform that effectively obfuscate and remove the need to understand some of the deep work that goes on. But you can still do it, you still have that capability, but we are expanding it to be able to have orchestrated and repeatable processes accessible to more teams within the business. >> In terms of accessibility to more teams in the business. You talked about data scientists, data analysts, developers. Steven, I want to talk to you, as the chief data officer, are you having more and more conversations with that role and how is it emerging and evolving within your customer base? >> Hmm. That's a good question, and it is evolving because I think if you look historically at the way that Airflow has been used it's often from the ground up. You have individual data engineers or maybe single data engineering teams who adopt Airflow 'cause it's very popular. Lots of people know how to use it and they bring it into an organization and say, "Hey, let's use this to run our data pipelines." But then increasingly as you turn from pure workflow management and job scheduling to the larger topic of orchestration you realize it gets pretty complicated, you want to have coordination across teams, and you want to have standardization for the way that you manage your data pipelines. And so having a managed service for Airflow that exists in the cloud is easy to spin up as you expand usage across the organization. And thinking long term about that in the context of orchestration that's where I think the chief data officer or the head of analytics tends to get involved because they really want to think of this as a strategic investment that they're making. Not just per team individual Airflow deployments, but a network of data orchestrators. >> That network is key. Every company these days has to be a data company. We talk about companies being data driven. It's a common word, but it's true. Whether it is a grocer or a bank or a hospital, they've got to be data companies. So talk to me a little bit about Astronomer's business model. How is this available? How do customers get their hands on it? >> Jeff, go ahead. >> Yeah, yeah. So we have a managed cloud service and we have two modes of operation. One, you can bring your own cloud infrastructure. So you can say here is an account in say, AWS or Azure and we can go and deploy the necessary infrastructure into that, or alternatively we can host everything for you. So it becomes a full SaaS offering. But we then provide a platform that connects at the backend to your internal IDP process.
So however you are authenticating users to make sure that the correct people are accessing the services that they need with role-based access control. From there we are deploying through Kubernetes, the different services and capabilities into either your cloud account or into an account that we host. And from there Airflow does what Airflow does, which is its ability to then reach to different data systems and data platforms and to then run the orchestration. We make sure we do it securely, we have all the necessary compliance certifications required for GDPR in Europe and HIPAA based out of the US, and a whole bunch host of others. So it is a secure platform that can run in a place that you need it to run, but it is a managed Airflow that includes a lot of the extra capabilities like the cloud developer environment and the open lineage services to enhance the overall airflow experience. >> Enhance the overall experience. So Steven, going back to you, if I'm a Conde Nast or another organization, what are some of the key business outcomes that I can expect? As one of the things I think we've learned during the pandemic is access to realtime data is no longer a nice to have for organizations. It's really an imperative. It's that demanding consumer that wants to have that personalized, customized, instant access to a product or a service. So if I'm a Conde Nast or I'm one of your customers, what can I expect my business to be able to achieve as a result of data orchestration? >> Yeah, I think in a nutshell it's about providing a reliable, scalable, and easy to use service for developing and running data workflows. And talking of demanding customers, I mean, I'm actually a customer myself, as you mentioned, I'm the head of data for Astronomer. You won't be surprised to hear that we actually use Astronomer and Airflow to run all of our data pipelines. And so I can actually talk about my experience. When I started I was of course familiar with Airflow, but it always seemed a little bit unapproachable to me if I was introducing that to a new team of data scientists. They don't necessarily want to have to think about learning something new. But I think because of the layers that Astronomer has provided with our Astro service around Airflow it was pretty easy for me to get up and running. Of course I've got an incentive for doing that. I work for the Airflow company, but we went from about, at the beginning of last year, about 500 data tasks that we were running on a daily basis to about 15,000 every day. We run something like a million data operations every month within my team. And so as one outcome, just the ability to spin up new production workflows essentially in a single day you go from an idea in the morning to a new dashboard or a new model in the afternoon, that's really the business outcome is just removing that friction to operationalizing your machine learning and data workflows. >> And I imagine too, oh, go ahead, Jeff. >> Yeah, I think to add to that, one of the things that becomes part of the business cycle is a repeatable capabilities for things like reporting, for things like new machine learning models. And the impediment that has existed is that it's difficult to take that from a team that's an analyst team who then provide that or a data science team that then provide that to the data engineering team who have to work the workflow all the way through. 
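For readers who want to see what the DAGs Steven and Jeff keep referring to actually look like in code, here is a minimal sketch using Airflow's TaskFlow API. It is an illustrative example only, not code from Astronomer's platform; it assumes Airflow 2.4 or later, and the task names, schedule, and data are hypothetical.

```python
# Minimal, hypothetical Airflow DAG: extract -> transform -> load, run daily.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2023, 1, 1), catchup=False)
def daily_metrics_pipeline():
    @task
    def extract():
        # Placeholder for pulling rows from a source system.
        return [{"order_id": 1, "amount": 42.0}, {"order_id": 2, "amount": 17.5}]

    @task
    def transform(rows):
        # Compute a simple daily metric from the extracted rows.
        return sum(r["amount"] for r in rows)

    @task
    def load(total):
        # Placeholder for publishing the metric to a dashboard or warehouse.
        print(f"daily revenue total: {total}")

    load(transform(extract()))


daily_metrics_pipeline()
```

Each decorated function becomes a task, the call chain defines the dependencies, and the scheduler handles the daily runs; this is roughly the kind of file that the Cloud IDE shown in the demo a bit further down generates on your behalf.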
What we're trying to unlock is the ability for those teams to directly get access to scheduling and orchestrating capabilities so that a business analyst can have a new report for C-suite execs that needs to be done once a week, but the time to repeatability for that report is much shorter. So it is then immediately in the hands of the person that needs to see it. It doesn't have to go into a long list of to-dos for a data engineering team that's already overworked, such that they eventually get to it in a month's time. So that is also a part of it. Orchestration I think is fairly well understood, and a lot of people get the benefit of being able to orchestrate things within a business, but having more people be able to do it, and shortening the time to that repeatability, is one of the main benefits of good managed orchestration. >> So a lot of workforce productivity improvements in what you're doing to simplify things, giving more people access to data to be able to make those faster decisions, which ultimately helps the end user on the other end to get that product or the service that they're expecting like that. Jeff, I understand you have a demo that you can share so we can kind of dig into this. >> Yeah, let me take you through a quick look of how the whole thing works. So our starting point is our cloud infrastructure. This is the login. You go to the portal. You can see there's a bunch of workspaces that are available. Workspaces are like individual places for people to operate in. I'm not going to delve into all the deep technical details here, but the starting point for a lot of our data science customers is what we call our Cloud IDE, which is a web-based development environment for writing and building out DAGs without actually having to know how the underpinnings of Airflow work. This is an internal one, something that we use. You have a notebook-like interface that lets you write Python code and SQL code and a bunch of specific bespoke types of blocks if you want. They all get pulled together and create a workflow. So this is a workflow, which gets compiled to something that looks like a complicated set of Python code, which is the DAG. I then have a CI/CD process pipeline where I commit this through to my GitHub repo. So this comes to a repo here, which is where these DAGs that I created in the previous step exist. I can then go and say, all right, I want to see how those particular DAGs have been running. We then get to the actual Airflow part. So this is the managed Airflow component. So we add the ability for teams to fairly easily bring up an Airflow instance and write code inside our notebook-like environment to get it into that instance. So you can see it's been running. That same process that we built here, that graph, ends up here inside this, but you don't need to know how the fundamentals of Airflow work in order to get this going. Then we can run one of these, it runs in the background and we can manage how it goes. And from there, every time this runs, it's emitting to a process underneath, which is the open lineage service, which is the lineage integration that allows me to come in here and have a look and see that this was that actual, that same graph that we built, but now it's the historic version. So I know where things started, where things are going, and how it ran. And then I can also do a comparison.
So if I want to see how this particular run worked compared to one historically, I can grab one from a previous date and it will show me the comparison between the two. So that combination of managed Airflow, getting Airflow up and running very quickly, but the Cloud IDE that lets you write code and know how to get something into a repeatable format get that into Airflow and have that attached to the lineage process adds what is a complete end-to-end orchestration process for any business looking to get the benefit from orchestration. >> Outstanding. Thank you so much Jeff for digging into that. So one of my last questions, Steven is for you. This is exciting. There's a lot that you guys are enabling organizations to achieve here to really become data-driven companies. So where can folks go to get their hands on this? >> Yeah, just go to astronomer.io and we have plenty of resources. If you're new to Airflow, you can read our documentation, our guides to getting started. We have a CLI that you can download that is really I think the easiest way to get started with Airflow. But you can actually sign up for a trial. You can sign up for a guided trial where our teams, we have a team of experts, really the world experts on getting Airflow up and running. And they'll take you through that trial and allow you to actually kick the tires and see how this works with your data. And I think you'll see pretty quickly that it's very easy to get started with Airflow, whether you're doing that from the command line or doing that in our cloud service. And all of that is available on our website >> astronomer.io. Jeff, last question for you. What are you excited about? There's so much going on here. What are some of the things, maybe you can give us a sneak peek coming down the road here that prospects and existing customers should be excited about? >> I think a lot of the development around the data awareness components, so one of the things that's traditionally been complicated with orchestration is you leave your data in the place that you're operating on and we're starting to have more data processing capability being built into Airflow. And from a Astronomer perspective, we are adding more capabilities around working with larger datasets, doing bigger data manipulation with inside the Airflow process itself. And that lends itself to better machine learning implementation. So as we start to grow and as we start to get better in the machine learning context, well, in the data awareness context, it unlocks a lot more capability to do and implement proper machine learning pipelines. >> Awesome guys. Exciting stuff. Thank you so much for talking to me about Astronomer, machine learning, data orchestration, and really the value in it for your customers. Steve and Jeff, we appreciate your time. >> Thank you. >> My pleasure, thanks. >> And we thank you for watching. This is season three, episode one of our ongoing series covering exciting startups from the AWS ecosystem. I'm your host, Lisa Martin. You're watching theCUBE, the leader in live tech coverage. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Jeff Fletcher | PERSON | 0.99+ |
Steven | PERSON | 0.99+ |
Steve | PERSON | 0.99+ |
Steven Hillion | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Conde Nast | ORGANIZATION | 0.99+ |
US | LOCATION | 0.99+ |
thousands | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
HIPAA | TITLE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
two guests | QUANTITY | 0.99+ |
Airflow | ORGANIZATION | 0.99+ |
Airbnb | ORGANIZATION | 0.99+ |
10 thousands | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
Electronic Arts | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
Python | TITLE | 0.99+ |
two modes | QUANTITY | 0.99+ |
Airflow | TITLE | 0.98+ |
10,000 workflows | QUANTITY | 0.98+ |
about 500 data tasks | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
one outcome | QUANTITY | 0.98+ |
tens of thousands | QUANTITY | 0.98+ |
GDPR | TITLE | 0.97+ |
SQL | TITLE | 0.97+ |
GitHub | ORGANIZATION | 0.96+ |
astronomer.io | OTHER | 0.94+ |
Slack | ORGANIZATION | 0.94+ |
Astronomer | ORGANIZATION | 0.94+ |
some years ago | DATE | 0.92+ |
once a week | QUANTITY | 0.92+ |
Astronomer | TITLE | 0.92+ |
theCUBE | ORGANIZATION | 0.92+ |
last year | DATE | 0.91+ |
Kubernetes | TITLE | 0.88+ |
single day | QUANTITY | 0.87+ |
about 15,000 every day | QUANTITY | 0.87+ |
one cloud | QUANTITY | 0.86+ |
IDE | TITLE | 0.86+ |
Brian Stevens, Neural Magic | Cube Conversation
>> John: Hello and welcome to this cube conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We got a great conversation on making machine learning easier and more affordable in an era where everybody wants more machine learning and AI. We're featuring Neural Magic, whose CEO is also a Cube alumni, Brian Stevens. Brian, great to see you. Thanks for coming on this cube conversation. Talk about machine learning. >> Brian: Hey John, happy to be here again. >> John: What a buzz that's going on right now. Machine learning, one of the hottest topics, AI front and center, kind of going mainstream. We're seeing the success of the, of the kind of NextGen capabilities in the enterprise and in apps. It's a really exciting time. So perfect timing. Great, great to have this conversation. Let's start with taking a minute to explain what you guys are doing over there at Neural Magic. I know there's some history there, neural networks, MIT. But the, the convergence of what's going on, this big wave hitting, it's an exciting time for you guys. Take a minute to explain the company and your mission. >> Brian: Sure, sure, sure. So, as you said, the company's Neural Magic and spun out of MIT four plus years ago, along with some people and, and some intellectual property. And you summarize it better than I can 'cause you said, we're just trying to make, you know, AI that much easier. And so, but like another level of specificity around it is, you know, in the world you have a lot of like data scientists really focusing on making AI work for whatever their use case is. And then the next phase of that, then they're looking at optimizing the models that they built. And then it's not good enough just to work on models. You got to put 'em into production. So, what we do is we make it easier to optimize the models that have been developed and trained and then trying to make it super simple when it comes time to deploying those in production and managing them. >> John: You know, we've seen this movie before with the cloud. You start to see abstractions come out. Data science we saw was like the, the secret art of being like a data scientist, now democratization of data. You're kind of seeing a similar wave with machine learning models, foundational models, some call it, developers are getting involved. Model complexity's still there, but, but it's getting easier. There's almost like the democratization happening. You got complexity, you got deployment, its challenges, cost, you got developers involved. So it's like how do you grow it? How do you get more horsepower? And then how do you make developers productive, right? So like, this seems to be the thread. So, so where, where do you see this going? Because there's going to be a massive demand for, I want to do more with my machine learning. But what's the data source? What's the formatting? This kind of a stack develop, what, what are you guys doing to address this? Can you take us through and demystify this, this wave that's hitting, that everyone's seeing? >> Brian: Yeah. Now like you said, like, you know, the democratization of all of it. And that brings me all the way back to like the roots of open source, right? When you think about like, like back in the day you had to build your own tech stack yourself. A lot of people probably don't remember that.
And I think that's what I equate to where AI has gotten to, with what you were talking about, the foundational models that didn't really exist years ago. So you really were like putting the layers of your models together and the formulas and it was a lot of heavy lifting. And so there was so much time spent on development. With far too few success cases, you know, to get into production to solve like a business or technical need. But as these, what's happening is as these models are becoming foundational, it means people don't have to start from scratch. They're actually able to, you know, the avant-garde now is to start with an existing model that almost does what you want, but then applying your data set to it. So it's, you know, it's really the industry moving forward. And then we, you know, and, and the best thing about it is open source plays a new dimension, but this time, you know, in the, in the realm of AI. And so to us though, like, you know, I've been like, I spent a career focusing on, I think on like the, not just the technical side, but the consumption of the technology and how it's still way too hard for somebody to actually like, operationalize technology that all those vendors throw at them. So I've always been like empathetic to the user around, like, you know, what their job is once you give them great technology. And so it's still too difficult even with the foundational models because what happens is there's really this impedance mismatch between the development of the model and then where, where the model has to live and run and be deployed and the life cycle of the model, if you will. And so what we've done in our research is we've developed techniques to introduce what's known as sparsity into a machine learning model that's already been developed and trained. And what that sparsity does is it unlocks things by making that model so much smaller. So in many cases we can make a model 90 to 95% smaller, even smaller than that in research. So, and, and so by doing that, we do that in a way that preserves all the accuracy out of the foundational model as you talked about. So now all of a sudden you get this much smaller model just as accurate. And then the even more exciting part about it is we developed a software-based engine called DeepSparse. And what that, what the inference runtime does is takes that now sparsified model and it runs it, but because you sparsified it, it only needs a fraction of the compute that it, that it would've needed otherwise. So what we've done is make these models much faster, much smaller, and then by pairing that with an inference runtime, you now can actually deploy that model anywhere you want on commodity hardware, right? So x86 in the cloud, x86 in the data center, Arm at the edge, it's like this massive unlock that happens because you get the, the state-of-the-art models, but you get 'em, you know, on the IT assets and the commodity infrastructure. That is where all the applications are running today. >> John: I want to get into the inference piece and the DeepSparse you mentioned, but I first have to ask, you mentioned open source, Dave and I with some fellow cube alumnis were having a chat about, you know, the iPhone and Android moment where you got proprietary versus open source. You got a similar thing happening with some of these machine learning models where there's a lot of proprietary things happening and there's an open source movement that is growing. So is there a balance there? Are they all trying to do the same thing?
Is it more like a chip, you know, silicon's involved, all kinds of things going on that are really fascinating from a science perspective. What's your, what's your reaction to that? >> Brian: I think it's like anything that, you know, the way we talk about AI you'd think it had been around for decades, but the reality is it's been some of the deep learning models. When we first, when we first started taking models that the Brain team was working on at Google and building APIs around them on Google Cloud, where the first cloud to even have AI services was 2015, 2016. So when you think about it, it's really been what, 6 years since like this thing is even getting lift off. So I think with that, everybody's throwing everything at it. You know, there's tons of funded specialty hardware thrown at training or inference, new companies. There's legacy companies that are getting into like AI now, whether it's a, you know, a CPU company that's now building specialized ASICs for training. There's new tech stacks, proprietary software, and there's a ton of as-a-service offerings. So it really is, you know, what's gone from nascent 8 years ago is the wild, wild west out there. So there's a, there's a little bit of everything right now and I think that makes sense because at the early part of any industry it really becomes really specialized. And that's the, you know, showing my age of like, you know, the early part of the two thousands, you know, Red Hat, people weren't running x86 in the enterprise back then and they thought it was a toy and they certainly weren't running open source, but you really, and it made sense that they weren't because it didn't deliver what they needed to at that time. So they needed specialty stacks, they needed expensive, they needed expensive hardware that did what an Oracle database needed to do. They needed proprietary software. But what happens is that commoditizes through both hardware and through open source and the same thing's really just starting with AI. >> John: Yeah. And I think that's a great point to call that out because in any industry timing's everything, right? I mean I remember back in the 80s, late 80s and 90s, AI, you know, stuff was going on and it just wasn't, there wasn't enough horsepower, there wasn't enough tech. >> Brian: Yep. >> John: You mentioned some of the processing. So AI is this industry that has all these experts who have been scratching that itch for decades. And now with cloud and custom silicon, the tech fundamentals at the lower end of the stack, if you will, on the performance side are significantly more performant. It's there, you got more capabilities. >> Brian: Yeah. >> John: Now you're kicking into more software, faster software. So it just seems like we're at a tipping point where finally it's here, like that AI moment or machine learning, and now data is, is involved. So this is where organizations I see really jumping in with the CEO mandate. Hey team, make ML work for us. Go figure it out. It's got to be an advantage for us. >> Brian: Yeah. >> John: So now they go, okay boss, we will. So what, what do they do? What are the steps an enterprise takes to get machine learning into their organizations? Cause you know, it's coming down from the boards, you know, how does this work for rob? >> Brian: Yeah. Like the, you know, the, what we're seeing is it's like anything, like it's, whether that was open source adoption or whether that was cloud adoption, it always starts usually with one person.
And increasingly it is the CEO, who realizes they're getting further behind the competition because they're not leaning in, you know, faster. But typically it really comes down to like a really strong practitioner that's inside the organization, right? And, that realizes that the number one goal isn't doing more and just training more models and necessarily being proprietary about it. It's really around understanding the art of the possible. Something that's grounded in the art of the possible, what, what deep learning can do today and what business outcomes you can deliver, you know, if you can employ it. And then there's well proven paths through that. It's just that because of where it's been, it's not that industrialized today. It's very much, you know, you see ML project by ML project, it's very snowflakey, right? And that was kind of the early days of open source as well. And so, we're just starting to get to the point where it's getting easier, it's getting more industrialized, there's less steps, there's less burden on developers, there's less burden on, on the deployment side. And we're trying to bring that, that whole last mile by saying, you know what? Deploying deep learning and AI models should be as easy as it is to deploy your application, right? You shouldn't have to take an extra step to deploy an AI model. It shouldn't require new hardware, it shouldn't require a new process, a new DevOps model. It should be as simple as what you're already doing. >> John: What is the best practice for companies to effectively bring an acceptable level of machine learning and performance into their organizations? >> Brian: Yeah, I think like the, the number one start is like what you hinted at before is they, they have to know the use case. They have to, in most cases, you're going to find across every industry you know, that that problem's been tackled by some company, right? And then you have to have the best practice around fine-tuning the models that already exist. So fine tuning that existing model, that foundational model, on your unique dataset. You, you know, if you are in medical instruments, it's not good enough to identify that it's a medical instrument in the picture. You got to know what type of medical instrument. So there's always a fine tuning step. And so we've created open source tools that make it easy for you to do two things at once. You can fine tune that existing foundational model, whether that's in the language space or whether that's in the vision space. You can fine tune that on your dataset. And at the same time you get an optimized model that comes out the other end. So you get kind of both things. So you, you no longer have to worry about, we're freeing you from worrying about the complexity of that transfer learning, if you will. And we're freeing you from worrying about, well where am I going to deploy the model? Where does it need to be? Does it need to be on a device, an edge, a data center, a cloud edge? What kind of hardware is it? Is there enough hardware there? We're liberating you from all of that. Because what you want, what you can count on, is there'll always be commodity capability, commodity CPUs where you want to deploy in abundance 'cause that's where your application is. And so all of a sudden we're just freeing you of that, of that whole step. >> John: Okay. Let's get into DeepSparse because you mentioned that earlier.
What inspired the creation of DeepSparse and how does it differ from other solutions in the market that are out there? >> Brian: Sure. So, so where is it unique? It starts by, by two things. One is, what the industry's pretty good at from the optimization side is they're good at like this thing called quantization, which turns like, you know, big numbers into small numbers, lower precision. So a 32 bit representation of a, of an AI weight into 8 bit. And they're good at like cutting out layers, which also takes away accuracy. What we've figured out is to take those, the industry techniques for those that are best practice, but we combined it with unstructured sparsity. So by reducing that model by 90 to 95% in size, that's great because it's made it smaller. But we've taken that, with the DeepSparse engine, when you deploy it, it looks at that model and says, because it's so much smaller, I no longer have to run the part of the model that's been essentially sparsified. So what that's done is, it's meant that you no longer need a supercomputer to run models because there's not nearly as much math and processing as there was before the model was optimized. So now what happens is, every CPU platform out there has, has an enormous amount of compute because we've sparsified the rest of it away. So you can pick a, you can pick your, your laptop and you have enough compute to run state-of-the-art models. The second thing is that you need a software engine to do that 'cause it ignores the parts of the model it doesn't need to run, which is what like specialized hardware can't do. The second part is it's then turned into a memory efficiency problem. So it's really around just getting memory, getting the models loaded into the cache of the computer and keeping it there, never having to go back out to memory. So, so our techniques are both, we reduce the model size and then we only run the part of the model that matters and then we keep it all in cache. And so what that does is it gets us to like these, these low, low latencies faster and we're able to increase, you know, the CPU processing by an order of magnitude. >> John: Yeah. That low latency is key. And you got developers, you know, coding super fast. We'll get to the developer angle in a second. I want to just follow up on this, this motivation behind the, the DeepSparse because you know, as we were talking earlier before we came on camera about the old days, I mean, not too long ago, virtualization and VMware abstracted away the OS from, from the hardware, right? And the server virtualization changed the game.
Do we make them faster? A yes. But I think the most amazing power is that we've turned AI into a docker based microservice. And so like who in the industry wants to deploy their apps the old way on a os without virtualization, without docker, without Kubernetes, without microservices, without service mesh without serverless. You want all those tools for your apps by converting AI models. So they can be run inside a docker container with no apologies around latency and performance cause it's faster. You get the best of that whole world that you just talked about, which is, you know, what we're calling, you know, software delivered AI. So now the AI lives in the same world. Organizations that have gone through that digital cloud transformation with their app infrastructure. AI fits into that world. >> John: And this is where the abstraction concepts matter. When you have these inflection points, the convergence of compute data, machine learning that powers AI, it really becomes a developer opportunity. Because now applications and businesses, when they actually go through the digital transformation, their businesses are completely transformed. There is no IT. Developers are the application. They are the company, right? So AI will be part of whatever business or app will be out there. So there is a application developer angle here. Brian, can you explain >> Brian: Oh completely. >> John: how they're going to use this? Because you mentioned docker container microservice, I mean this really is an insane flipping of the script for developers. >> Brian: Yeah. >> John: So what's that look like? >> Brian: Well speak, it's because like AI's kind of, I mean, again, like it's come so fast. So you figure there's my app team and here's my AI team, right? And they're in different places and the AI team is dragging in specialized infrastructure in support of that as well. And that's not how app developers think. Like they've ran on fungible infrastructure that subtracted and virtualized forever, right? And so what we've done is we've, in addition to fitting into that world that they, that they like, we've also made it simple for them for they don't have to be a machine learning engineer to be able to experiment with these foundational models and transfer learning 'em. We've done that. So they can do that in a couple of commands and it has a simple API that they can either link to their application directly as a library to make difference calls or they can stand it up as a standalone, you know, scale up, scale out inference server. They get two choices. But it really fits into that, you know, you know that world that the modern developer, whether they're just using Python or C or otherwise, we made it just simple. So as opposed to like Go learn something else, they kind of don't have to. So in a way though, it's made it. It's almost made it hard because people expect when we talk to 'em for the first time to be the old way. Like, how do you look like a piece of hardware? Are you compatible with my existing hardware that runs ML? Like, no, we're, we're not. Because you don't need that stack anymore. All you need is a library called to make your prediction and that's it. That's it. >> John: Well, I mean, we were joking on Twitter the other day with someone saying, is AI a pet or a cattle? Right? Because they love their, their AI bots right now. So, so I'd say pet there. But you look at a lot of, there's going to be a lot of AI. 
So on a more serious note, you mentioned microservices, will DeepSparse have an API for developers? And what does that look like? What do I do? >> Brian: Yeah. >> John: Tell me what my, as a developer, what's the roadmap look like? What's the >> Brian: Yeah, it, it really looks, it really can go in both modes. It can go in a standalone server mode where it handles, you know, a REST API and it can scale out with K8s as the workload comes up and scale back, and like try to make hardware do that. Hardware may scale back, but it's just sitting there dormant, you know, so with this, it scales the same way your application needs to. And then for a developer, they basically just, they just pip install deepsparse, you know, it's one command to do an install, and then they do two calls, really. The first call is a library call that the app makes to create the model. And the model's really already trained, but they, it's called a model create call. And the second command they do is they make a call to do a prediction. And it's as simple as that. So it's, it's AI that's as simple as using any other library that the developers are already using, which I, which sounds hard to fathom because it is just so simplified. >> John: Software delivered AI. Okay, that's a cool thing. I believe in it personally. I think that's the way to go. I think there's going to be plenty of hardware options if you look at the advances of cloud players that got more silicon coming out. Yeah. More GPU. I mean, there's more instances, I mean, everything's out there right now. So the question is how does that evolve in your mind? Because that seems to be key. You have open source projects emerging. What, what path does this take? Is there a parallel mental model that you see, Brian, that is similar? You mentioned open source earlier. Is it more like a VMware virtualization thing or is it more of a cloud thing? Is there, yeah, is it going to evolve in a, in a trajectory that looks similar to what we might've seen in the past? >> Brian: Yeah, we're, you know, when I, when I got involved with the company, what I, when I thought about it and I was reasoning about it, like, do you, you know, you want to, like, we all do when you want to join something full-time, I thought about it and said, where will the industry eventually get to? Right? To fully realize the value of, of deep learning and what's plausible as it evolves. And to me, like I, I know it's the old adage of, you know, you know, software, its hardware, cloudy software. But it truly was like, you know, we can solve these problems in software. Like there's nothing special that's happening at the hardware layer in the processing of AI. The reality is that it's just early in the industry. So the view that we had was like, this is eventually the best place where the industry will be, is the liberation of being able to run AI anywhere. Like you're really not democratizing, you democratize the model. But if you can't run the model anywhere you want because these models are getting bigger and bigger with these large language models, then you're kind of not democratizing. And if you've got to go and like buy a cluster to run this thing on. So the democratization comes by if all of a sudden that model can be consumed anywhere on demand without planning, without provisioning, wherever infrastructure is. And so I think that's with or without Neural Magic, that's where the industry will go and will get to. I think we're the leaders, leaders in getting it there.
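To make the "two calls" concrete, here is a rough sketch of what that developer experience can look like with the open-source deepsparse Python package. Treat it as an illustration rather than Neural Magic's official quickstart: the task name, model path, and input text are placeholder assumptions, and the exact call signature may vary by package version.

```python
# Assumed setup (one command, as described in the conversation):
#   pip install deepsparse
from deepsparse import Pipeline  # high-level inference API in the deepsparse package

# Call 1: create the model/pipeline from an already trained, sparsified model.
# The task and model_path values below are hypothetical placeholders.
sentiment = Pipeline.create(
    task="text-classification",
    model_path="./sparsified-sentiment.onnx",
)

# Call 2: make a prediction.
prediction = sentiment(["Deploying this model on a plain CPU was painless."])
print(prediction)
```

The same pipeline can also sit behind the standalone server mode Brian mentions, so the choice between embedding it as a library and scaling it out behind a REST endpoint is a deployment decision rather than a code rewrite.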
>> Brian: It's right, because we're more advanced on these techniques. >> John: Yeah. And your background too. You've seen OpenStack, pre-cloud, you saw open source grow and it's still exponentially growing. And so you have the same similar dynamic with machine learning models growing. And they're also segmenting into almost a, an ML stack or foundational model as we talk about. So you're starting to see the formation of tooling, inference. So a lot of components coming. It's almost a stack, it's almost a, it literally is like an operating system problem space, you know? How do you run things, how do you link things? How do you bring things together? Is that what's going on here? Is this like a data modeling operating environment kind of Red Hat type thing going on? Like. >> Brian: Yeah. Yeah. Like I think there is, you know, I thought about that too. And I think there is the role of like distribution, because the industrialization of this is not happening fast enough. Like, I can go back to like every customer, every, every user does it in their own kind of way. Like it's not, everyone's a little bit of a snowflake. And I think that's okay. There's definitely plenty of companies that want to come in and say, well, this is the way it's going to be and we industrialize it as long as you do it our way. The reality is technology doesn't get industrialized by one company just saying, do it our way. And so that's why like we've taken the approach through open source by saying like, hey, you haven't really industrialized it if you said, we made it simple, but you always got to run AI here. Yeah, right. You only like really industrialize it if you break it down into components that are simple to use and they work integrated in the stack the way you want them to. And so to me, that first principle was getting things into microservices and Docker containers that could be run on VMware, OpenShift, on the cloud, in the edge. And so that's the, that's the real part that we're working on. The other part, like I do agree, like I think it's going to quickly move into less about the model. Less about the training of the model and the transfer learning, you know, the data set of the model. We're taking away the complexity of optimization. Liberating deployment to be anywhere. And I think the last mile, John, is going to be around the MLOps around that. Because it's easy to think of, like, now that it's just a software problem, we've turned it into a software problem. So it's easy to think of software as like kind of a point release, but that's not the reality, right? It's a life cycle. And it's, and so I think ML very much brings in the what is the lifecycle of that deployment? And, you know, you get into more interesting conversations, to be honest, than like, once you've deployed in a Docker container, around like model drift and accuracy, and the dataset changes and the user changes, and how do you, from an ML perspective, send that signal back for retraining. And, and that's where I think a lot of the, more of the innovation's going to start to move there. >> John: Yeah. And software also, the software problem, the software opportunity as well is developer focused. And if you look at the cloud native landscape now, similar stacks developing a lot of components. A lot of things to, to stitch together, a lot of things that are automating under the hood. A lot of developer productivity conversations. I think this is going to go down that same road.
I want to get your thoughts because developers will set the pace. And this is something that's clear in this next wave: developer productivity. They're the de facto standards bodies. They will decide what microservices check, API check. Now, skill gap is going to be a problem because it's relatively new. So model sprawl, model sizes, proprietary versus open. There has to be a way to kind of crunch that down into a, like a DevOps, like just make it, get the developer out of the, the muck. So what's your view? Are we early days like that? Or what's the young kid in college studying CS or whatever degree who comes into this with, with both feet? What are they doing? >> Brian: I'll probably say like the, the non-popular answer to that. A little bit is it's happening so fast that it's going to get kind of boring fast. Meaning like, yeah, you could go to school and go to MIT, right? Sorry. Like, and you could go the whole way through to, like, becoming a model architect, like inventing the next model, right? And the layers and combining 'em and et cetera, et cetera. And then what operators, and, and building a model that's bigger than the last one and trains faster, right? And there will be those people, right? That actually, like they're building the engines the same way. You know, I grew up as an infrastructure software developer. There's not a lot of companies that hire those anymore because they're all sitting inside of three big clouds. Yeah. Right? So you better be a good app developer, but I think what you're going to see is, before you had to be everything, you had to be the, if you were going to use infrastructure, you had to know how to build infrastructure. And I think the same thing's true around, and is quickly exiting, ML: to be able to use ML in your company, you better be like, great at every aspect of ML, including every intricacy inside of the model and every operation it's doing, and that's quickly changing. Like, you're going to start with a starting point. You know, in the future you're not going to be like cracking open these GPT models, you're going to just be pulling them off the shelf, fine tuning 'em and go. You don't have to invent it. You don't have to understand it. And I think that's going to be a pivot point, you know, in the industry between, you know, what's the future? What's, what's the future of a, a data scientist, ML engineer, researcher look like? >> John: I think that's, the outcome's going to be determined. I mean, you mentioned, you know, doing it yourself, what an SRE is for a Google where the server scale's huge. So yeah, it might have to, at the beginning, get boring, you get obsolete quickly, but that means it's progressing. So, the scale becomes huge. And that's where I think it's going to be interesting when we see that scale. >> Brian: Yep. Yeah, I think that's right. I think that's right. And we always, and, and what I've always said, and again, the directive to my ML team, is that I want every developer to be adept at being able to take advantage of ML as a non-ML engineer, right? It's got to be that simple. And I think, I think it's getting there. I really do. >> John: Well, Brian, great, great to have you on theCUBE here on this cube conversation. As part of the startup showcase that's coming up, you're going to be featured. Or your company will be featured on the upcoming AWS Startup Showcase on making machine learning easier and more affordable as more machine learning models come in. You guys got DeepSparse and some great technology.
We're going to dig into that next time. I'll give you the final word right now. What do you see for the company? What are you guys looking for? Give a plug for the company right now. >> Brian: Oh, give a plug that I haven't already doubled in as the plug. >> John: You're hiring engineers, I assume from MIT and other places. >> Brian: Yep. I think like the, the biggest thing is like, like we're on the developer side. We're here to make this easy. The majority of inference today is, is on CPUs already, believe it or not, as much as kind of, we like to talk about hardware and specialized hardware. The majority is already on CPUs. We're basically bringing 95% cost savings to CPUs through this acceleration. So, but we're trying to do it in a way that makes it community first. So I think the, the shout out would be come find the Neural Magic community and engage with us and you'll find, you know, a thousand other like-minded people in Slack that are willing to help you as well as our engineers. And, and let's, let's go take on some successful AI deployments. >> John: Exciting times. This is, I think one of the pivotal moments, NextGen data, machine learning, and now starting to see AI not be that chat bot, just, you know, customer support or some basic natural language processing thing. You're starting to see real innovation. Brian Stevens, CEO of Neural Magic, bringing the magic here. Thanks for the time. Great conversation. >> Brian: Thanks John. >> John: Thanks for joining me. >> Brian: Cheers. Thank you. >> John: Okay. I'm John Furrier, host of theCUBE here in Palo Alto, California for this cube conversation with Brian Stevens. Thanks for watching.
Bassam Tabbara, Upbound | CloudNativeSecurityCon 23
(upbeat music) >> Hello and welcome back to theCUBE's coverage of Cloud Native SecurityCon North America 2023. Its first inaugural event. It's theCUBE's coverage. We were there at the first event for a KubeCon before CNCF kind of took it over. It was in Seattle. And so in Seattle this week is Cloud Native SecurityCon. Of course, theCUBE is there covering via our Palo Alto Studios and our experts around the world who are bringing in Bassam Tabbara who's the CEO and founder of upbound.io. That's the URL, but Upbound is the company. The creators of Crossplane. Really kind of looking at the Crossplane, across the abstraction layer, across clouds. A big part of, as we call supercloud trend. Bassam, great to see you. You've been legend in the open source community. Great to have you on. >> Thanks, John. Always good to be on theCUBE. >> I really wanted to bring you in 'cause I want to get your perspective. You've seen the movie, you've seen open source software grow, it continues to grow. Now you're starting to see the Linux Foundation, which has CNCF really expanding their realm. They got the CloudNativeCon, KubeCon, which is Kubernetes event. That's gotten so massive and so successful. We've been to every single one as you know. I've seen you there and all of them as well. So that's going great. Now they got this new event that's spins out dedicated to security. Everybody wants to know why the new event? What's the focus? Is it needed? What will they do? What's different from KubeCon? Where do I play? And so there's a little bit of a question mark in the ecosystem around this event. And so we've been reporting on it. Looking good so far. People are buzzing, again, they're keeping it small. So that kind of managing expectations like any good event would do. But I think it's been successful, which I wanted like to get your take on how you see it. Is this good? Are you indifferent? Are you excited by this? What's your take? >> I mean, look, it's super exciting to see all the momentum around cloud native. Obviously there are different dimensions of cloud native securities, an important piece. Networking, storage, compute, like all those things I think tie back together and in some ways you can look at this event as a focused event on the security aspect as it relates to cloud native. And there are lots of vendors in this space. There's lots of interesting projects in the space, but the unifying theme is that they come together and probably around the Kubernetes API and the momentum around cloud native and with Kubernetes at the center of it. >> On the focus on Kubernetes, it seems this event is kind of classic security where you want to have deep dives. Again, I call it the event operating system 'cause you decouple, make things highly cohesive, and you link them together. I don't see a problem with it. I kind of like this. I gave it good reviews if they stay focused because security is super critical. There was references to bind and DNS. There's a lot of things in the infrastructure plumbing that need to be looked at or managed or figured out or just refactored for modernization needs. And I know you've done a lot with storage, for instance, storage, networking, kernel. There's a lot of things in the old tech or tech in the cloud that needs to be kind, I won't say rebooted, but maybe reset or jump. Do you see it that way? Are there things that need to get done or is it just that there's so much complexity in the different cloud cluster code thing going on? 
>> It's obviously security is a very, very big space and there are so many different aspects of it that people you can go into. I think the thing that's interesting around the cloud native community is that there is a unifying theme. Like forget the word cloud native for a second, but the unifying theme is that people are building around what looks like a standardized play around Kubernetes and the Kubernetes API. And as a result you can recast a lot of the technologies that we are used to in the past in a traditional security sense. You can recast them on top of this new standardized approach or on Kubernetes, whether it's policy or protecting a supply chain or scanning, or like a lot of the access control authorization, et cetera. All of those things can be either revived to apply to this cloud native play and the Kubernetes play or creating new opportunities for companies to actually build new and interesting projects and companies around a standardized play. >> Do you think this also will help the KubeCon be more focused around the developer areas there and just touching on security versus figuring out how to take something so important in KubeCon, which the stakeholders in KubeCon have have grown so big, I can see security sucking a lot of oxygen out of the room there. So here you move it over, you keep it over here. Will anything change on the KubeCon site? We'll be there in in Amsterdam in April. What do you think the impact will be? Good? Is it good for the community? Just good swim lanes? What's your take? >> Yeah, I still think KubeCon will be an umbrella event for the whole cloud native community. I suspect that you'll see some of the same vendors and projects and everything else represented in KubeCon. The way I think about all the branched cloud native events are essentially a way to have a more focused discussion, get people together to talk about security topics or networking topics or things that are more focused way. But I don't think it changes the the effect of KubeCon being the umbrella around all of it. So I think you'll see the same presence and maybe larger presence going forward at Amsterdam. We're planning to be there obviously and I'm excited to be there and I think it'll be a big event and having a smaller event is not going to diminish the effect of KubeCon. >> And if you look at the developer community they've all been online for a long time, from IRC chat to now Slack and now new technologies and stuff like Discord out there. The event world has changed post-pandemic. So it makes sense. And we're seeing this with all vendors, by the way, and projects. The digital community angle is huge because if you have a big tent event like KubeCon you can make that a rallying moment in the industry and then have similar smaller events that are highly focused that build off that that are just connective tissue or subnets, if you will, or communities targeted for really deeper conversations. And they could be smaller events. They don't have to be monster events, but they're connected and traverse into the main event. This might be the event format for the future for all companies, whether it's AWS or a company that has a community where you create this network effect, if you will, around the people. >> That's right. And if you look at things like AWS re:Invent, et cetera, I mean, that's a massive events. And in some ways it, if it was a set of smaller sub events, maybe it actually will flourish more. I don't know, I'm not sure. >> They just killed the San Francisco event. 
>> That's right. >> But they have re:Inforce, all right, so they just established that their big events are re:Invent and re:Inforce as their big. >> Oh, I didn't hear about re:Inforce. That's news to me. >> re:Inforce is their third event. So they're doing something similar as CloudNativeCon, which is you have to have an event and then they're going to create a lot of sub events underneath. So I think they are trying to do that. Very interesting. >> Very interesting for sure. >> So let's talk about what you guys are up to. I know from your standpoint, you had a lot of security conversations. How is Crossplane doing? Obviously, you saw our Supercloud coverage. You guys fit right into that model where clients, customers, enterprises are going to want to have multiple cloud operating environments for whatever the use case, whether you're using ChatGPT, you got to get an Azure instance up and running for that. Now with APIs, we're hearing a lot of developers doing that. So you're going to start to see this cross cloud as VMware calls, what we call it supercloud. There's more need for Crossplane like thinking. What's the update? >> For sure, and we see this very clearly as well. So the fact that there is a standardization layer, there is a layer that lets you converge the different vendors that you have, the different clouds that you have, the different hype models that you have, whether it's hybrid or private, public, et cetera. The unifying theme is that you're literally bringing all those things under one control plane that enables you to actually centralize and standardize on security, access control, helps you standardize on cost control, quota policy, as well as create a self-service experience for your developers. And so from a security standpoint, the beauty of this is like, you could use really popular projects like open policy agent or Kyverno or others if you want to do policy and do so uniformly across your entire stack, your entire footprint of tooling, vendors, services and across deployment models. Those things are possible because you're standardizing and consolidating on a control plane on top of all. And that's the thing that gets our customers excited. That we're seeing in the community that they could actually now normalize standardize on small number of projects and tools to manage everything. >> We were talking about that in our summary of the keynote yesterday. Dave Vellante and I were talking about the idea of clients want to have a redo of their security. They've been, just the tooling has been building up. They got zero trust in place, maybe with some big vendor, but now got the cloud native opportunity to refactor and reset and reinvent their security paradigm. And so that's the positive thing we're hearing. Now we're seeing enterprises want this cross cloud capabilities or Crossplane like thinking that you guys are talking about. What are your customers telling you? Can you share from an enterprise perspective where they're at in this journey? Because part of the security problems that we've been reporting on has been because clients are moving from IT to cloud native and not everyone's moved over yet. So they're highly vulnerable to ransomware and all kinds of other crap. So another attacks, so they're wide open, But people who are moving into cloud native, are they stepping up their game on this Crossplane opportunity? Where are they at? Can you share data on that? >> Yeah, we're grateful to be talking to a lot of customers these days. 
And the interesting thing is, even if you talked about large financial institutions, banks, et cetera, the common theme that we hear is that they bought tools for each of the different departments, however they're organized. Sometimes you see the folks that are running databases or networking being separated from, say, the compute or app developers, or there are all these different departments within an organization. And for each one of those, they've made localized decisions for tooling and services that they bought. What we're seeing now consistently is that they're all getting together and trying to figure out how to standardize on a smaller, single set of tooling and services that goes across all the different departments and all the different aspects of the business that they're running. And this is where this discussion gets very interesting. Instead of buying a different policy tool for each department, or one that fits each, you could actually standardize on policy across the entire footprint of services that they're managing. And you get that by standardizing on a control plane, or standardizing on effectively one point of control for everything that they're doing. And that theme, like literally, it gets all our customers excited. This is why they're engaging in all of this. It's almost the holy grail. The thing that I've been trying to do for a long time. >> I know. >> And it's finally happening. >> I know you and I have talked about this many times, but I got to ask you the one thing that jumps into everybody's head when you hear control plane is lock-in. So how do you discuss that lock-in perception versus the reality of the situation? How do you unpack that for the customer? 'Cause they want choice at the end of the day. There's the preferred vendors for sure on the hyperscale side and app side and open source, but what's the lock-in? What does the lock-in conversation look like? Or do they even have that conversation? >> Yeah. To be honest, I mean, there could be two dimensions here. Most of our customers and people that are using Crossplane, or using our product around it, most of them are concentrated in, say, one cloud vendor and have others. So I don't think this is necessarily about multicloud per se or being locked into one vendor. But they do manage many different services, and they have legacy tooling, and they have different systems that they bought at different stages, and they want to bring them all together. And by bringing them all together, that helps them make choices about consolidating or even replacing some of them. But right now everything is siloed, everything is separate, both organizationally as well as the code bases or investments and tooling or contracts. Everything is just completely separated, and it requires humans to put them together. And organizations actually try to gather around and put them together. I don't know if lock-in is the driving goal for this, but it is standardization and consolidation. That's the driving initiative. >> And so unification and building is the big driver. They're building out. >> Correct, and you can ask, why are they doing that? What does standardization help with? It helps them to become more productive. They can move faster, they can innovate faster. There's a ton of, like literally, revenue written all over it. So it's super important to them that they achieve this and increase their pace of innovation around this, and they do that by standardizing.
>> The great point in all this, in your success at Upbound and now CNCF's success with KubeCon + CloudNativeCon and now with the inaugural event of Cloud Native SecurityCon, is that the customers are involved, a lot of end users are involved. There's a big driver not only from the industry and the developers in getting architecture right and having choice. The customers want this to happen. They're leaning in, they're part of it. So that's a big driver. Where does this go? If you had to throw a dart at the board five years from now for Cloud Native SecurityCon, what does it look like if you had to predict the trajectory of this event and community? >> Yeah, I mean, look, I think the trajectory is that we have what looks like a standardization layer emerging that is all encompassing. And as a result, there is a ton of opportunity for vendors, projects, and communities to build around, within, and on top of this layer. And essentially create, I think you talked about an operating system earlier and the decentralized aspect of this, but it's an opportunity to actually, for what looks like the first time, have a convergence happening industry-wide, through open source and open source foundations. And I think that means that there'll be new opportunity and lots of new projects and things that are created in the space. And it also means that if you don't attach to this space, you'll likely be left out. >> Awesome. Bassam, great to have you on, great expert commentary. Obviously a multi-time CUBE alumni and supporter of theCUBE, and as you become successful we really appreciate your support for helping us get the content out there. And best of luck to your team, and thanks for weighing in on Cloud Native SecurityCon. >> Awesome. It's always good talking to you, John. Thank you. >> Great stuff. This is more CUBE coverage from Palo Alto, getting folks on the ground, on location, getting us the stories in Seattle. Of course, Cloud Native SecurityCon, the inaugural event, looks like it will be the beginning of a multi-year journey for the CNCF, focusing on security. Of course, theCUBE's here to cover it, every angle of it, and extract the signal from the noise. I'm John Furrier, thanks for watching. (upbeat music)
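One way to picture the "define it once, enforce it everywhere" idea Bassam keeps returning to is a single policy function evaluated over a normalized inventory of resources from many providers. The sketch below is illustrative Python only, not Crossplane's or OPA's actual interface, and every resource and rule in it is invented.

```python
# Toy illustration of one policy applied across heterogeneous providers.
# Not Crossplane's or OPA's real API; resources and rules are invented.
from dataclasses import dataclass

@dataclass
class Resource:
    provider: str   # "aws", "gcp", "azure", "on-prem", ...
    kind: str       # "bucket", "database", ...
    encrypted: bool
    region: str

# The normalized inventory a control plane would maintain for you.
inventory = [
    Resource("aws", "bucket", encrypted=False, region="us-east-1"),
    Resource("gcp", "database", encrypted=True, region="europe-west1"),
    Resource("azure", "bucket", encrypted=True, region="eastus2"),
]

APPROVED_REGIONS = {"us-east-1", "eastus2"}

def violations(res: Resource) -> list:
    """One policy definition, evaluated the same way regardless of provider."""
    problems = []
    if not res.encrypted:
        problems.append("encryption at rest is required")
    if res.region not in APPROVED_REGIONS:
        problems.append(f"region {res.region} is not on the approved list")
    return problems

for res in inventory:
    for problem in violations(res):
        print(f"[{res.provider}] {res.kind}: {problem}")
```

The point of the sketch is only the shape: the policy lives in one place, and onboarding another provider means normalizing its resources, not rewriting the rules.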
Oracle Aspires to be the Netflix of AI | Cube Conversation
(gentle music playing) >> For centuries, we've been captivated by the concept of machines doing the job of humans. And over the past decade or so, we've really focused on AI and the possibility of intelligent machines that can perform cognitive tasks. Now in the past few years, with the popularity of machine learning models ranging from the recent ChatGPT to BERT, we're starting to see how AI is changing the way we interact with the world. How is AI transforming the way we do business? And what does the future hold for us there? At theCUBE, we've covered Oracle's AI and ML strategy for years, which has really been used to drive automation into Oracle's autonomous database. We've talked a lot about MySQL HeatWave in-database machine learning, and AI pushed into Oracle's business apps. Oracle, it tends to lead in AI, but not by competing as a direct AI player per se, but rather by embedding AI and machine learning into its portfolio to enhance its existing products, and bring new services and offerings to the market. Now, last October at CloudWorld in Las Vegas, Oracle partnered with Nvidia, which is the go-to AI silicon provider for vendors. And they announced an investment, a pretty significant investment, to deploy tens of thousands more Nvidia GPUs to OCI, the Oracle Cloud Infrastructure, and build out Oracle's infrastructure for enterprise scale AI. Now, Oracle CEO Safra Catz said something to the effect of: this alliance is going to help customers across industries, from healthcare, manufacturing, telecoms, and financial services, to overcome the multitude of challenges they face. Presumably she was talking about just driving more automation and more productivity. Now, to learn more about Oracle's plans for AI, we'd like to welcome in Elad Ziklik, who's the vice president of AI services at Oracle. Elad, great to see you. Welcome to the show. >> Thank you. Thanks for having me. >> You're very welcome. So first let's talk about Oracle's path to AI. I mean, it's the hottest topic going. For years you've been incorporating machine learning into your products and services. You know, could you tell us what you've been working on, how you got here? >> So great question. So as you mentioned, I think most of the original foray into AI was embedding AI and using AI to make our applications and databases better. So inside MySQL HeatWave, inside our autonomous database, we've been driving AI, and of course all our SaaS apps. So Fusion, our large enterprise business suite for HR applications and CRM and ERP and whatnot, has AI built inside it. Most recently, NetSuite, our small and medium business SaaS suite, started using AI for things like automated invoice processing and whatnot. And most recently, over the last, I would say, two years, we've started exposing and bringing these capabilities into the broader OCI, Oracle Cloud Infrastructure, so the developers and ISVs and customers can start using our AI capabilities to make their apps better and their experiences and business workflows better, and not just consume these as embedded inside Oracle. And this recent partnership that you mentioned with Nvidia is another step in bringing the best AI infrastructure capabilities into this platform so you can actually build any type of machine learning workflow or AI model that you want on Oracle Cloud. >> So when I look at the market, I see companies out there like DataRobot or C3 AI, there's maybe a half dozen that sort of pop up on my radar anyway.
And my premise has always been that most customers, they don't want to become AI experts, they want to buy applications and have AI embedded, or they want AI to manage their infrastructure. So my question to you is, how does Oracle help its OCI customers support their business with AI? >> So it's a great question. So I think what most customers want is business AI. They want AI that works for the business. They want AI that works for the enterprise. I call it the last mile of AI. And they want this thing to work. The majority of them don't want to hire large and expensive data science teams to go and build everything from scratch. They just want the business problem solved by applying AI to it. My best analogy is Lego. So if you think of Lego, Lego has these millions of Lego blocks that you can use to build anything that you want. But the majority of people, like me or like my kids, they want the Lego Death Star kit or the Lego Eiffel Tower thing. They want a thing that just works, and it's very easy to use. And it's still Lego blocks, you still need to put some things together, but it just works for the scenario that you're looking for. So that's our focus. Our focus is making it easy for customers to apply AI where they need to, in the right business context. So whether it's embedding it inside the business applications, like adding forecasting capabilities to your supply chain management or financial planning software, whether it's adding chat bots into the line of business applications, integrating these things into your analytics dashboard, even all the way to, we have a new platform piece we call ML applications that allows you to take a machine learning model and scale it for the thousands of tenants that you may have. 'Cause this is a big problem for most of the ML use cases. It's very easy to build something for a proof of concept or a pilot or a demo. But then if you need to take this and then deploy it across your thousands of customers or your thousands of regions or facilities, then it becomes messy. So this is where we spend our time, making it easy to take these things into production in the context of your business application or your business use case that you're interested in right now.
And the key is around managing the right expectations of what this thing is capable of doing. Like, I have a story from I think five, six years ago when technology was much inferior than it is today. Well, one of the telco providers I was working with wanted to roll a chat bot that does realtime translation. So it was for a support center for of the call centers. And what they wanted do is, Hey, we have English speaking employees, whatever, 24/7, if somebody's calling, and the native tongue is different like Hebrew in my case, or Chinese or whatnot, then we'll give them a chat bot that they will interact with and will translate this on the fly and everything would work. And when they rolled it out, the feedback from customers was horrendous. Customers said, the technology sucks. It's not good. I hate it, I hate your company, I hate your support. And what they've done is they've changed the narrative. Instead of, you go to a support center, and you assume you're going to talk to a human, and instead you get a crappy chat bot, they're like, Hey, if you want to talk to a Hebrew speaking person, there's a four hour wait, please leave your phone and we'll call you back. Or you can try a new amazing Hebrew speaking AI powered bot and it may help your use case. Do you want to try it out? And some people said, yeah, let's try it out. Plus one to try it out. And the feedback, even though it was the exact same technology was amazing. People were like, oh my God, this is so innovative, this is great. Even though it was the exact same experience that they hated a few weeks earlier on. So I think the key lesson that I picked from this experience is it's all about setting the right expectations, and working around the right use case. If you are replacing a human, the level is different than if you are just helping or augmenting something that otherwise would take a lot of time. And I think this is the focus that we are doing, picking up the tasks that people want to accomplish or that enterprise want to accomplish for the customers, for the employees. And using chat bots to make those specific ones better rather than, hey, this is going to replace all humans everywhere, and just be better than that. >> Yeah, I mean, to the point you mentioned expense reports. I'm in a Twitter thread and one guy says, my favorite part of business travel is filling out expense reports. It's an hour of excitement to figure out which receipts won't scan. We can all relate to that. It's just the worst. When you think about companies that are building custom AI driven apps, what can they do on OCI? What are the best options for them? Do they need to hire an army of machine intelligence experts and AI specialists? Help us understand your point of view there. >> So over the last, I would say the two or three years we've developed a full suite of machine learning and AI services for, I would say probably much every use case that you would expect right now from applying natural language processing to understanding customer support tickets or social media, or whatnot to computer vision platforms or computer vision services that can understand and detect objects, and count objects on shelves or detect cracks in the pipe or defecting parts, all the way to speech services. It can actually transcribe human speech. And most recently we've launched a new document AI service. 
That can actually look at unstructured documents like receipts or invoices or government IDs or even proprietary documents, loan applications, student application forms, patient intake forms and whatnot, and completely automate them using AI. So if you want to do one of the things that are, I would say, common bread and butter for any industry, whether it's financial services or healthcare or manufacturing, we have a suite of services that any developer can go and use, easily customized with their own data. You don't need to be an expert in deep learning or large language models. You could just use our AutoML capabilities and build your own version of the models. Just go ahead and use them. And if you do have proprietary, complex scenarios that you need to build custom from scratch, we actually have the most cost effective platform for that. So we have OCI Data Science as well as built-in machine learning platforms inside the databases, inside the Oracle Database and MySQL HeatWave, that allow data scientists, Python-wielding people that actually like to build and tweak and control and improve, to have everything that they need to go and build machine learning models from scratch, deploy them, and monitor and manage them at scale in a production environment. And most of it is brand new. So we did not have these technologies four or five years ago, and we've started building them, and they're now at enterprise scale over the last couple of years. >> So what are some of the state-of-the-art tools that AI specialists and data scientists need if they're going to go out and develop these new models? >> So I think it's on three layers. I think there's an infrastructure layer where the Nvidias of the world come into play. For some of these things, you want massively efficient, massively scaled infrastructure in place. So we are the most cost effective and performant large scale GPU training environment today. We're going to be first to onboard the new Nvidia H100s. These are the new super powerful GPUs for large language model training. So we have that covered for you in case you need this, 'cause you want to build these ginormous things. You need a data science platform, a platform where you can open a Python notebook and just use all these fancy open source frameworks and create the models that you want, and then click on a button and deploy it. And it infinitely scales wherever you need it. And in many cases you just need, what I call, the applied AI services. You need the Lego sets, the Lego Death Star, the Lego Eiffel Tower. So we have a suite of these sets for typical scenarios, whether it's cognitive services of, like, again, understanding images or documents, all the way to solving particular business problems. So an anomaly detection service, a demand forecasting service, those will be the equivalent of these Lego sets. So if this is the business problem that you're looking to solve, we have services out there where we can bring your data, call an API, train a model, and you get the model and use it in your production environment. So wherever you want to play, all the way into embedding this thing inside these applications, obviously, wherever you want to play, we have the tools for you to go and engage, from infrastructure to SaaS at the top, and everything in the middle. >> So when you think about the data pipeline, and the data life cycle, and the specialized roles that came out of kind of the (indistinct) era, if you will, I want to focus on two: developers and data scientists.
So the developers, they hate dealing with infrastructure, and they've got to deal with infrastructure. Now they're being asked to secure the infrastructure; they just want to write code. And the data scientists, they're spending all their time trying to figure out, okay, what's the data quality? And they're wrangling data, and they don't spend enough time doing what they want to do. So there's been a lack of collaboration. Have you seen that change? Are these approaches allowing collaboration between data scientists and developers on a single platform? Can you talk about that a little bit? >> Yeah, that is a great question. One of the biggest sets of scars that I have on my back from building these platforms in other companies is exactly that. Every persona had a set of tools, and these tools didn't talk to each other, and the handoff was painful. And most of the machine learning things evaporate or die on the floor because of this problem. It's very rarely that they are unsuccessful because the algorithm wasn't good enough. In most cases it's somebody builds something, and then you can't take it to production, you can't integrate it into your business application. You can't take the data out, train, create an endpoint and integrate it back, like it's too painful. So the way we are approaching this is focused on this problem exactly. We have a single set of tools, so that if you publish a model as a data scientist, then developers, and even business analysts that are sitting inside of a business application, are able to consume it. We have a single model store, a single feature store, a single management experience across the various personas that need to play in this. And we spend a lot of time building, and borrowing a word that cellular folks used, and I really liked it, building insight highways to make it easier to bring these insights into where you need them inside applications, both inside our applications, inside our SaaS applications, but also inside custom third party and even first party applications. And this is where a lot of our focus goes, just because we have dealt with so much pain doing this inside our own SaaS that we now have built the tools, and we're making them available for others, to make this process of building a machine learning, outcome-driven insight in your app easier. And it's not just the model development, and it's not just the deployment, it's the entire journey of taking the data, building the model, training it, deploying it, looking at the real data that comes from the app, and creating this feedback loop in a more efficient way. And that's our focus area. Exactly this problem. >> Well thank you for that. So, last week we had our Supercloud2 event, and I had Juan Loaiza on, and he spent a lot of time talking about how open Oracle is in its philosophy, and I got a lot of feedback. They were like, Oracle, open? I don't really think so. But the truth is, if you think about Oracle database, it never met a hardware platform that it didn't like. So in that sense it's open. So, but my point is, a big part of machine learning and AI is driven by open source tools and frameworks. What's your open source strategy? What do you support from an open source standpoint? >> So I'm a strong believer that you don't actually know, nobody knows, where the next industry-shifting innovation in AI is going to come from.
If you look six months ago, nobody foresaw DALL-E, the magical text-to-image generation, and the explosion it brought to art and design types of experiences. If you look six weeks ago, I don't think anybody had foreseen ChatGPT and what it can do for a whole bunch of industries. So to me, assuming that a customer or partner or developer would want to lock themselves into only the tools that a specific vendor can produce is ridiculous. 'Cause nobody knows, and if anybody claims that they know where the innovation is going to come from in a year or two, let alone in five or 10, they're just wrong or lying. So our strategy for Oracle is, I call it the Netflix of AI. So if you think about Netflix, they produced a bunch of high quality shows on their own. A few years ago it was House of Cards. Last month my wife and I binge watched Ginny & Georgia. But they also curated a lot of shows that they found around the world and brought them to their customers. So it started with things like Seinfeld or Friends, and most recently it was Squid Game, and there's a famous Israeli TV series called Fauda that Netflix bought in, and they bought it as is, and they gave it the Netflix value. So you have captioning, and you have the ability to speed up the movie, and you have it inside your app, and you can download it and watch it offline and everything, but nobody at Netflix was involved in the production of these first seasons. Now if these things hit and they're great, then the third season or the fourth season will get the full Netflix production value, high value budget, high value location shooting or whatever. But you as a customer, you don't care whether the producer and director and screenplay writer is a Netflix employee or is somebody else's employee. It is fulfilled by Netflix. I believe that we will become, or we are looking to become, the Netflix of AI. We are building a bunch of AI in a bunch of places where we think it's important and we have some competitive advantage, like healthcare with the Cerner partnership or whatnot. But I want to bring the best AI software and hardware to OCI and do a fulfillment by Oracle on that. So you'll get the Oracle security and identity and single bill and everything you'd expect from a company like Oracle. But we don't have to be building the data science and the models for everything. So this means both open source, we recently announced a partnership with Anaconda, the leading provider of Python distribution in the data science ecosystem, where we are doing a joint strategic partnership of bringing all that goodness to Oracle customers, as well as being in the process of doing the same with Nvidia and all those software libraries, not just the Hubble, both for other stuff like Triton, but also for healthcare specific stuff, as well as other ISVs, other AI leading ISVs that we are in the process of partnering with to get their stuff into OCI and into Oracle, so that you can truly consume the best AI hardware and the best AI software in the world on Oracle. 'Cause that is what I believe our customers would want: the ability to choose from any open source engine, and honestly from any ISV type of solution that is AI powered, and they want to use it in their experiences.
You know, they say that we tend to under or over-hype things in the early stages and under-hype them long term; you kind of used the internet as an example. What's your take on that premise? >> So, I think that this type of technology is going to be an inflection point in how software is being developed. I truly believe this. I think this is an internet-style moment, and the way software interfaces and software applications are being developed will dramatically change over the next year, two, or three because of this type of technology. I think there will be industries that will be shifted. I think education is a good example. I saw this thing opened on my son's laptop. So I think education is going to be transformed. The design industry, like images or whatever, it's already been transformed. But I think that for mass adoption, like beyond the hype, beyond the peak of inflated expectations, if I'm using Gartner terminology, I think certain things need to go and happen. One is this thing needs to become more reliable. So right now it is a complete black box that sometimes produces magic and sometimes produces just nonsense. And it needs to have better explainability and better lineage to, how did you get to this answer? 'Cause I think enterprises are going to really care about the things that they surface with their customers or use internally. So I think that is one thing that's going to come out. And the other thing that's going to come out is, I think there are going to come industry-specific large language models, or industry-specific ChatGPTs. Something like how OpenAI did co-pilot for writing code. I think we will start seeing these types of apps solving for specific business problems, understanding contracts, understanding healthcare, writing doctors' notes on behalf of doctors so they don't have to spend time manually recording and analyzing conversations. And I think that would become the sweet spot of this thing. There will be companies, whether it's OpenAI or Microsoft or Google or hopefully Oracle, that will use this type of technology to solve for specific, very high value business needs. And I think this will change how interfaces happen. So going back to your expense report, the world of, I'm going to go into an app and I'm going to click on seven buttons in order to get some job done, like this world is gone. Like, I'm going to say, hey, please do this and that, and I expect an answer to come out. I've seen a recent demo about marketing and sales. So a customer sends an email that is interested in something, and then a ChatGPT-powered thing just produces the answer. I think this is how the world is going to evolve. Like, yes, there's a ton of hype, yes, it looks like magic, and right now it is magic, but it's not yet productive for most enterprise scenarios. But in the next 6, 12, 24 months, this will start getting more dependable, and it's going to change how these industries are being managed. Like, I think it's an internet-level revolution. That's my take. >> It's very interesting. And it's going to change the way in which we interact. Instead of accessing the data center through APIs, we're going to access it through natural language processing, and that opens up technology to a huge audience. Last question, it's a two part question. The first part is what you guys are working on for the future, but the second part of the question is, we got data scientists and developers in our audience. They love the new shiny toy.
So give us a little glimpse of what you're working on in the future, and what would you say to them to persuade them to check out Oracle's AI services? >> Yep. So I think there's two main things that we're doing. One is around healthcare. With a recent acquisition, we are spending a significant effort around revolutionizing healthcare with AI. Of course, many scenarios, from patient care using computer vision and cameras, through automating and making insurance claims better, to research and pharma. We are making the best models from leading organizations, and internal ones, available for hospitals and researchers and insurance providers everywhere. And we truly are looking to become the leader in AI for healthcare. So I think that's a huge focus area. And the second part is, again, going back to the enterprise AI angle. Like, if you have a business problem that you want to apply AI to solve, we want to be your platform. Like, you could use others if you want to build everything complicated and whatnot; we have a platform for that as well. But like, if you want to apply AI to solve a business problem, we want to be your platform. We want to be the, again, the Netflix of AI kind of a thing, where we are the place for the greatest AI innovations, accessible to any developer, any business analyst, any user, any data scientist on Oracle Cloud. And we're making a significant effort on these two fronts, as well as developing a lot of the missing pieces and building blocks that we see are needed in this space to make truly, like, a great experience for developers and data scientists. And what would I recommend? Get started, try it out. We actually have a shameless sales plug here. We have a free tier for all of our AI services, so it typically costs you nothing. I would highly recommend to just go and try these things out. Go play with it. If you are a Python-wielding developer and you want to try a little bit of AutoML, go down that path. If you're not even there and you're just like, hey, I have these customer feedback things and I want to try out if I can understand them and apply AI and visualize and do some cool stuff, we have services for that. My recommendation is, and I think ChatGPT got us there, 'cause I see people that have nothing to do with AI, and can't even spell AI, going and trying it out. I think this is the time. Go play with these things, go play with these technologies and find what AI can do to you or for you. And I think Oracle is a great place to start playing with these things. >> Elad, thank you. Appreciate you sharing your vision of making Oracle the Netflix of AI. Love that, and really appreciate your time. >> Awesome. Thank you. Thank you for having me. >> Okay. Thanks for watching this Cube conversation. This is Dave Vellante. We'll see you next time. (gentle music playing)
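Elad's "task completion" framing from earlier in the conversation, the expense bot that asks a couple of questions and then files the report, reduces to a small slot-filling loop. The sketch below is a toy: the fields are made up, and nothing here is Oracle's implementation or a real chat integration.

```python
# Toy slot-filling sketch of a task-completion bot.
# Fields are invented; not Oracle's implementation, and no real chat platform.
REQUIRED_SLOTS = ["amount", "currency", "category", "date"]

def run_expense_bot(ask, submit):
    """ask(prompt) -> str gathers one answer; submit(dict) files the expense."""
    expense = {}
    for slot in REQUIRED_SLOTS:
        expense[slot] = ask(f"What is the {slot} for this expense?")
    if ask(f"Submit {expense}? (yes/no)").strip().lower() == "yes":
        return submit(expense)
    return None

# Wired to the console here; in practice `ask` would post a Slack or Teams
# message and `submit` would call the expense system's API.
if __name__ == "__main__":
    reference = run_expense_bot(input, lambda e: f"EXP-{abs(hash(str(e))) % 10000:04d}")
    print("Filed:", reference)
```

The design point is the one Elad makes: the bot owns a narrow, well-scoped task and asks only for what it still needs, rather than trying to be a general-purpose conversationalist.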
Bob Muglia, George Gilbert & Tristan Handy | How Supercloud will Support a new Class of Data Apps
(upbeat music) >> Hello, everybody. This is Dave Vellante. Welcome back to Supercloud2, where we're exploring the intersection of data analytics and the future of cloud. In this segment, we're going to look at how the Supercloud will support a new class of applications, not just work that runs on multiple clouds, but rather a new breed of apps that can orchestrate things in the real world. Think Uber for many types of businesses. These applications, they're not about codifying forms or business processes. They're about orchestrating people, places, and things in a business ecosystem. And I'm pleased to welcome my colleague and friend, George Gilbert, former Gartner analyst, Wikibon market analyst, former equities analyst, as my co-host. And we're thrilled to have Tristan Handy, who's the founder and CEO of DBT Labs, and Bob Muglia, who's the former President of Microsoft's Enterprise business and former CEO of Snowflake. Welcome all, gentlemen. Thank you for coming on the program. >> Good to be here. >> Thanks for having us. >> Hey, look, I'm going to start actually with the SuperCloud, because both Tristan and Bob, you've read the definition. Thank you for doing that. And Bob, you have some really good input, some thoughts on maybe some of the drawbacks and how we can advance this. So what are your thoughts in reading that definition around SuperCloud? >> Well, I thought first of all that you did a very good job of laying out all of the characteristics of it and helping to define it overall. But I do think it can be tightened a bit, and I think it's helpful to do it in as short a way as possible. And so in the last day I've spent a little time thinking about how to take it and write a crisp definition. And here's my go at it. This is one day old, so gimme a break if it's going to change. And of course we have to follow the industry, and so that, and whatever the industry decides, but let's give this a try. So in the way I think you're defining it, what I would say is a SuperCloud is a platform that provides programmatically consistent services hosted on heterogeneous cloud providers. >> Boom. Nice. Okay, great. I'm going to go back and read the script on that one and tighten that up a bit. Thank you for spending the time thinking about that. Tristan, would you add anything to that, or what are your thoughts on the whole SuperCloud concept? >> So as I read through this, I fully realize that we need a word for this thing, because I have experienced the inability to talk about it as well. But for many of us who have been living in the Confluent, Snowflake, you know, this world of like new infrastructure, this seems fairly uncontroversial. Like, I read through this, and I'm just like, yeah, this is like the world I've been living in for years now. And I noticed that you called out Snowflake for being an example of this, but I think that there are like many folks, myself included, for whom this world like fully exists today. >> Yeah, I think that's a fair, I dunno if it's criticism, but people observe, well, what's the big deal here? It's just kind of what we're living in today. It reminds me of, you know, Tim Berners-Lee saying, well, this is what the internet was supposed to be. It was supposed to be Web 2.0, so maybe this is what multi-cloud was supposed to be. Let's turn our attention to apps. Bob first and then go to Tristan. Bob, what are data apps to you? When people talk about data products, is that what they mean? Are we talking about something more, different? What are data apps to you?
Well, to understand data apps, it's useful to contrast them to something, and I just use the simple term people apps. I know that's a little bit awkward, but it's clear. And almost everything we work with, almost every application that we're familiar with, be it email or Salesforce or any consumer app, those are applications that are targeted at responding to people. You know, in contrast, a data application reacts to changes in data and uses some set of analytic services to autonomously take action. So where applications that we're familiar with respond to people, data apps respond to changes in data. And they both do something, but they do it for different reasons. >> Got it. You know, George, you and I were talking about, you know, it comes back to SuperCloud, broad definition, narrow definition. Tristan, how do you see it? Do you see it the same way? Do you have a different take on data apps? >> Oh, geez. This is like a conversation that I don't know has an end. It's like, I write a Substack, and there's like this little community of people who all write Substacks, and we argue with each other about these kinds of things. Like, you know, there are as many different takes on this question as you can find, but the way that I think about it is that data products are atomic units of functionality that are fundamentally data driven in nature. So a data product can be as simple as an interactive dashboard that has, like, actually had design thinking put into it and serves a particular user group and has, like, actually gone through kind of a product development life cycle. And then a data app or data application is a kind of cohesive end-to-end experience that often encompasses, like, many different data products. So from my perspective there, this is very, very related to the way that these things are produced, the kinds of experiences that they provide, and that, like, data innovates every product that we've been building in, you know, software engineering for, you know, as long as there have been computers. >> You know, Zhamak Dehghani oftentimes uses the, you know, she doesn't name Spotify, but I think it's Spotify as that kind of example she uses. But I wonder if we can maybe try to take some examples. If you take, like George, if you take a CRM system today, you're inputting leads, you've got opportunities, it's driven by humans, they're really inputting the data, and then you've got this system that kind of orchestrates the business process, like runs a forecast. But in this data driven future, are we talking about the app itself pulling data in and automatically looking at data from the transaction systems, the call center, the supply chain, and then actually building a plan? George, is that how you see it? >> I go back to the example of Uber. It may not be the most sophisticated data app that we'd build now, but it was like one of the first where you do have users interacting with their devices as riders trying to call a car or a driver. But the app then looks at the location of all the drivers in proximity, and it matches a driver to a rider. It calculates an ETA to the rider. It calculates an ETA then to the destination, and it calculates a price. Those are all activities that are done sort of autonomously that don't require a human to type something into a form. The application is using changes in data to calculate an analytic product and then to operationalize that, to assign the driver, to, you know, calculate a price. Those are, that's an example of what I would think of as a data app.
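George's Uber example is a compact statement of the pattern: the app reacts to location data, matches the nearest driver, and derives an ETA and a price with no form-filling in the loop. A toy sketch of that matching step follows; the speeds and rates are invented for illustration.

```python
# Toy sketch of the Uber-style loop George describes: match the nearest
# driver, estimate an ETA, quote a price. Speeds and rates are made up.
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def dispatch(rider, destination, drivers, speed_kmh=30, base=2.5, per_km=1.2):
    # Pick the driver closest to the rider, then derive ETAs and a price.
    driver_id, driver_loc = min(drivers.items(),
                                key=lambda kv: haversine_km(kv[1], rider))
    trip_km = haversine_km(rider, destination)
    return {
        "driver": driver_id,
        "pickup_eta_min": round(60 * haversine_km(driver_loc, rider) / speed_kmh, 1),
        "trip_eta_min": round(60 * trip_km / speed_kmh, 1),
        "price": round(base + per_km * trip_km, 2),
    }

drivers = {"d1": (37.78, -122.41), "d2": (37.76, -122.43)}
print(dispatch(rider=(37.77, -122.42), destination=(37.80, -122.40), drivers=drivers))
```

Everything the real system adds, surge pricing, traffic-aware ETAs, continuous re-matching, is a refinement of this same loop reacting to fresh data rather than to a form.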
And my question then I guess for Tristan is if we don't have all the pieces in place for sort of mainstream companies to build those sorts of apps easily yet, like how would we get started? What's the role of a semantic layer in making that easier for mainstream companies to build? And how do we get started, you know, say with metrics? How does that, how does that take us down that path? >> So what we've seen in the past, I dunno, decade or so, is that one of the most successful business models in infrastructure is taking hard things and rolling 'em up behind APIs. You take messaging, you take payments, and you all of a sudden increase the capability of kind of your median application developer. And you say, you know, previously you were spending all your time being focused on how do you accept credit cards, how do you send SMS payments, and now you can focus on your business logic, and just create the thing. One of, interestingly, one of the things that we still don't know how to API-ify is concepts that live inside of your data warehouse, inside of your data lake. These are core concepts that, you know, you would imagine that the business would be able to create applications around very easily, but in fact that's not the case. It's actually quite challenging to, and involves a lot of data engineering pipeline and all this work to make these available. And so if you really want to make it very easy to create some of these data experiences for users, you need to have an ability to describe these metrics and then to turn them into APIs to make them accessible to application developers who have literally no idea how they're calculated behind the scenes, and they don't need to. >> So how rich can that API layer grow if you start with metric definitions that you've defined? And DBT has, you know, the metric, the dimensions, the time grain, things like that, that's a well scoped sort of API that people can work within. How much can you extend that to say non-calculated business rules or governance information like data reliability rules, things like that, or even, you know, features for an AIML feature store. In other words, it starts, you started pragmatically, but how far can you grow? >> Bob is waiting with bated breath to answer this question. I'm, just really quickly, I think that we as a company and DBT as a product tend to be very pragmatic. We try to release the simplest possible version of a thing, get it out there, and see if people use it. But the idea that, the concept of a metric is really just a first landing pad. The really, there is a physical manifestation of the data and then there's a logical manifestation of the data. And what we're trying to do here is make it very easy to access the logical manifestation of the data, and metric is a way to look at that. Maybe an entity, a customer, a user is another way to look at that. And I'm sure that there will be more kind of logical structures as well. >> So, Bob, chime in on this. You know, what's your thoughts on the right architecture behind this, and how do we get there? >> Yeah, well first of all, I think one of the ways we get there is by what companies like DBT Labs and Tristan is doing, which is incrementally taking and building on the modern data stack and extending that to add a semantic layer that describes the data. Now the way I tend to think about this is a fairly major shift in the way we think about writing applications, which is today a code first approach to moving to a world that is model driven. 
And I think that's what the big change will be is that where today we think about data, we think about writing code, and we use that to produce APIs as Tristan said, which encapsulates those things together in some form of services that are useful for organizations. And that idea of that encapsulation is never going to go away. It's very, that concept of an API is incredibly useful and will exist well into the future. But what I think will happen is that in the next 10 years, we're going to move to a world where organizations are defining models first of their data, but then ultimately of their business process, their entire business process. Now the concept of a model driven world is a very old concept. I mean, I first started thinking about this and playing around with some early model driven tools, probably before Tristan was born in the early 1980s. And those tools didn't work because the semantics associated with executing the model were too complex to be written in anything other than a procedural language. We're now reaching a time where that is changing, and you see it everywhere. You see it first of all in the world of machine learning and machine learning models, which are taking over more and more of what applications are doing. And I think that's an incredibly important step. And learned models are an important part of what people will do. But if you look at the world today, I will claim that we've always been modeling. Modeling has existed in computers since there have been integrated circuits and any form of computers. But what we do is what I would call implicit modeling, which means that it's the model is written on a whiteboard. It's in a bunch of Slack messages. It's on a set of napkins in conversations that happen and during Zoom. That's where the model gets defined today. It's implicit. There is one in the system. It is hard coded inside application logic that exists across many applications with humans being the glue that connects those models together. And really there is no central place you can go to understand the full attributes of the business, all of the business rules, all of the business logic, the business data. That's going to change in the next 10 years. And we'll start to have a world where we can define models about what we're doing. Now in the short run, the most important models to build are data models and to describe all of the attributes of the data and their relationships. And that's work that DBT Labs is doing. A number of other companies are doing that. We're taking steps along that way with catalogs. People are trying to build more complete ontologies associated with that. The underlying infrastructure is still super, super nascent. But what I think we'll see is this infrastructure that exists today that's building learned models in the form of machine learning programs. You know, some of these incredible machine learning programs in foundation models like GPT and DALL-E and all of the things that are happening in these global scale models, but also all of that needs to get applied to the domains that are appropriate for a business. And I think we'll see the infrastructure developing for that, that can take this concept of learned models and put it together with more explicitly defined models. And this is where the concept of knowledge graphs come in and then the technology that underlies that to actually implement and execute that, which I believe are relational knowledge graphs. >> Oh, oh wow. There's a lot to unpack there. 
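To make Bob's implicit-versus-explicit distinction concrete, here is a small hedged sketch: the same business rule first hard-coded inside application logic (the whiteboard-and-napkins model he describes), then stored as plain facts and rules that can be inspected and queried like any other data. The entities, attributes, and the discount rule are all invented for this toy; it stands in for the direction he is pointing at, not for any actual knowledge graph product.

```python
# Implicit model: the rule exists only inside application code.
def discount_implicit(customer):
    if customer["segment"] == "enterprise" and customer["tenure_years"] >= 3:
        return 0.15
    return 0.0

# Explicit model: entities, attributes, and rules stored as plain relations
# that any application (or a query engine) can inspect and reuse.
facts = {
    ("acme", "segment"): "enterprise",
    ("acme", "tenure_years"): 3,
}
rules = [
    # (rule name, conditions as (attribute, predicate) pairs, conclusion as (attribute, value))
    ("loyal_enterprise_discount",
     [("segment", lambda v: v == "enterprise"),
      ("tenure_years", lambda v: isinstance(v, int) and v >= 3)],
     ("discount", 0.15)),
]

def derive(entity):
    """Apply every rule whose conditions hold against the entity's stored facts."""
    derived = {}
    for _name, conditions, (attr, value) in rules:
        if all(pred(facts.get((entity, a))) for a, pred in conditions):
            derived[attr] = value
    return derived

print(discount_implicit({"segment": "enterprise", "tenure_years": 3}))  # 0.15
print(derive("acme"))  # {'discount': 0.15}, the same rule, but now it is queryable data
```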
So let me ask the Columbo question, Tristan, we've been making fun of your youth. We're just, we're just jealous. Columbo, I'll explain it offline maybe. >> I watch Columbo. >> Okay. All right, good. So but today if you think about the application stack and the data stack, which is largely an analytics pipeline, they're separate. Do they, those worlds, do they have to come together in order to achieve Bob's vision? When I talk to practitioners about that, they're like, well, I don't want to complexify the application stack cause the data stack today is so, you know, hard to manage. But, but do those worlds have to come together? And you know, through that model, I guess abstraction or translation that Bob was just describing, how do you guys think about that? Who wants to take that? >> I think it's inevitable that data and AI are going to become closer together. I think that the infrastructure there has been moving in that direction for a long time, whether you want to use the Lakehouse portmanteau or not. There's also, there's a next generation of data tech that is still in the like early stage of being developed. There's a company that I love that is essentially cross-cloud Lambda, and it's just a wonderful abstraction for computing. So I think that, you know, people have been predicting that these worlds are going to come together for a while. A16Z wrote a great post on this back in I think 2020, predicting this, and I've been predicting this since 2020. But what's not clear is the timeline, but I think that this is still just as inevitable as it's been. >> Who's that that does cross-cloud? >> Let me follow up on. >> Who's that, Tristan, that does cross-cloud Lambda? Can you name names? >> Oh, they're called Modal Labs. >> Modal Labs, yeah, of course. All right, go ahead, George. >> Let me ask about this vision of trying to put the semantics or the code that represents the business with the data. It gets us to a world that's sort of more data centric, where data's not locked inside or behind the APIs of different applications so that we don't have silos. But at the same time, Bob, I've heard you talk about building the semantics gradually on top of, into a knowledge graph that maybe grows out of a data catalog. And the vision of getting to that point, essentially the enterprise's metadata and then the semantics you're going to add onto it are really stored in something that's separate from the underlying operational and analytic data. So at the same time then why couldn't we gradually build semantics beyond the metric definitions that DBT has today? In other words, you build more and more of the semantics in some layer that DBT defines and that sits above the data management layer, but any requests for data have to go through the DBT layer. Is that a workable alternative? Or where, what type of limitations would you face? >> Well, I think the way the world will evolve is to start with the modern data stack and, you know, which is operational applications going through a data pipeline into some form of data lake, data warehouse, the Lakehouse, whatever you want to call it. And then, you know, this wide variety of analytics services that are built together. To the point that Tristan made about machine learning and data coming together, you see that in every major data cloud provider. Snowflake certainly now supports Python and Java. Databricks is of course building their data warehouse.
Certainly Google, Microsoft and Amazon are doing very, very similar things in terms of building complete solutions that bring together an analytics stack that typically supports languages like Python together with the data stack and the data warehouse. I mean, all of those things are going to evolve, and they're not going to go away because that infrastructure is relatively new. It's just being deployed by companies, and it solves the problem of working with petabytes of data if you need to work with petabytes of data, and nothing will do that for a long time. What's missing is a layer that understands and can model the semantics of all of this. And if you need to, if you want to model all, if you want to talk about all the semantics of even data, you need to think about all of the relationships. You need to think about how these things connect together. And unfortunately, there really is no platform today. None of our existing platforms are ultimately sufficient for this. It was interesting, I was just talking to a customer yesterday, you know, a large financial organization that is building out these semantic layers. They're further along than many companies are. And you know, I asked what they're building it on, and you know, it's not surprising they're using a, they're using combinations of some form of search together with, you know, textual based search together with a document oriented database. In this case it was Cosmos. And that really is kind of the state of the art right now. And yet those products were not built for this. They don't really, they can't manage the complicated relationships that are required. They can't issue the queries that are required. And so a new generation of database needs to be developed. And fortunately, you know, that is happening. The world is developing a new set of relational algorithms that will be able to work with hundreds of different relations. If you look at a SQL database like Snowflake or a big query, you know, you get tens of different joins coming together, and that query is going to take a really long time. Well, fortunately, technology is evolving, and it's possible with new join algorithms, worst case, optimal join algorithms they're called, where you can join hundreds of different relations together and run semantic queries that you simply couldn't run. Now that technology is nascent, but it's really important, and I think that will be a requirement to have this semantically reach its full potential. In the meantime, Tristan can do a lot of great things by building up on what he's got today and solve some problems that are very real. But in the long run I think we'll see a new set of databases to support these models. >> So Tristan, you got to respond to that, right? You got to, so take the example of Snowflake. We know it doesn't deal well with complex joins, but they're, they've got big aspirations. They're building an ecosystem to really solve some of these problems. Tristan, you guys are part of that ecosystem, and others, but please, your thoughts on what Bob just shared. >> Bob, I'm curious if, I would have no idea what you were talking about except that you introduced me to somebody who gave me a demo of a thing and do you not want to go there right now? >> No, I can talk about it. I mean, we can talk about it. 
Look, the company I've been working with is Relational AI, and they're doing this work to actually first of all work across the industry with academics and research, you know, across many, many different, over 20 different research institutions across the world to develop this new set of algorithms. They're all fully published, just like SQL, the underlying algorithms that are used by SQL databases are. If you look today, every single SQL database uses a similar set of relational algorithms underneath that. And those algorithms actually go back to system R and what IBM developed in the 1970s. We're just, there's an opportunity for us to build something new that allows you to take, for example, instead of taking data and grouping it together in tables, treat all data as individual relations, you know, a key and a set of values and then be able to perform purely relational operations on it. If you go back to what, to Codd, and what he wrote, he defined two things. He defined a relational calculus and relational algebra. And essentially SQL is a query language that is translated by the query processor into relational algebra. But however, the calculus of SQL is not even close to the full semantics of the relational mathematics. And it's possible to have systems that can do everything and that can store all of the attributes of the data model or ultimately the business model in a form that is much more natural to work with. >> So here's like my short answer to this. I think that we're dealing in different time scales. I think that there is actually a tremendous amount of work to do in the semantic layer using the kind of technology that we have on the ground today. And I think that there's, I don't know, let's say five years of like really solid work that there is to do for the entire industry, if not more. But the wonderful thing about DBT is that it's independent of what the compute substrate is beneath it. And so if we develop new platforms, new capabilities to describe semantic models in more fine grain detail, more procedural, then we're going to support that too. And so I'm excited about all of it. >> Yeah, so interpreting that short answer, you're basically saying, cause Bob was just kind of pointing to you as incremental, but you're saying, yeah, okay, we're applying it for incremental use cases today, but we can accommodate a much broader set of examples in the future. Is that correct, Tristan? >> I think you're using the word incremental as if it's not good, but I think that incremental is great. We have always been about applying incremental improvement on top of what exists today, but allowing practitioners to like use different workflows to actually make use of that technology. So yeah, yeah, we are a very incremental company. We're going to continue being that way. >> Well, I think Bob was using incremental as a pejorative. I mean, I, but to your point, a lot. >> No, I don't think so. I want to stop that. No, I don't think it's pejorative at all. I think incremental, incremental is usually the most successful path. >> Yes, of course. >> In my experience. >> We agree, we agree on that. >> Having tried many, many moonshot things in my Microsoft days, I can tell you that being incremental is a good thing. And I'm a very big believer that that's the way the world's going to go. I just think that there is a need for us to build something new and that ultimately that will be the solution. 
Now you can argue whether it's two years, three years, five years, or 10 years, but I'd be shocked if it didn't happen in 10 years. >> Yeah, so we all agree that incremental is less disruptive. Boom, but Tristan, you're, I think I'm inferring that you believe you have the architecture to accommodate Bob's vision, and then Bob, and I'm inferring from Bob's comments that maybe you don't think that's the case, but please. >> No, no, no. I think that, so Bob, let me put words into your mouth and you tell me if you disagree, DBT is completely useless in a world where a large scale cloud data warehouse doesn't exist. We were not able to bring the power of Python to our users until these platforms started supporting Python. Like DBT is a layer on top of large scale computing platforms. And to the extent that those platforms extend their functionality to bring more capabilities, we will also service those capabilities. >> Let me try and bridge the two. >> Yeah, yeah, so Bob, Bob, Bob, do you concur with what Tristan just said? >> Absolutely, I mean there's nothing to argue with in what Tristan just said. >> I wanted. >> And it's what he's doing. It'll continue to, I believe he'll continue to do it, and I think it's a very good thing for the industry. You know, I'm just simply saying that on top of that, I would like to provide Tristan and all of those who are following similar paths to him with a new type of database that can actually solve these problems in a much more architected way. And when I talk about Cosmos with something like Mongo or Cosmos together with Elastic, you're using Elastic as the join engine, okay. That's the purpose of it. It becomes a poor man's join engine. And I kind of go, I know there's a better answer than that. I know there is, but that's kind of where we are state of the art right now. >> George, we got to wrap it. So give us the last word here. Go ahead, George. >> Okay, I just, I think there's a way to tie together what Tristan and Bob are both talking about, and I want them to validate it, which is for five years we're going to be adding or some number of years more and more semantics to the operational and analytic data that we have, starting with metric definitions. My question is for Bob, as DBT accumulates more and more of those semantics for different enterprises, can that layer not run on top of a relational knowledge graph? And what would we lose by not having, by having the knowledge graph store sort of the joins, all the complex relationships among the data, but having the semantics in the DBT layer? >> Well, I think this, okay, I think first of all that DBT will be an environment where many of these semantics are defined. The question we're asking is how are they stored and how are they processed? And what I predict will happen is that over time, as companies like DBT begin to build more and more richness into their semantic layer, they will begin to experience challenges that customers want to run queries, they want to ask questions, they want to use this for things where the underlying infrastructure becomes an obstacle. I mean, this has happened in always in the history, right? I mean, you see major advances in computer science when the data model changes. And I think we're on the verge of a very significant change in the way data is stored and structured, or at least metadata is stored and structured. Again, I'm not saying that anytime in the next 10 years, SQL is going to go away. 
In fact, more SQL will be written in the future than has been written in the past. And those platforms will mature to become the engines, the slicer dicers of data. I mean that's what they are today. They're incredibly powerful at working with large amounts of data, and that infrastructure is maturing very rapidly. What is not maturing is the infrastructure to handle all of the metadata and the semantics that that requires. And that's where I say knowledge graphs are what I believe will be the solution to that. >> But Tristan, bring us home here. It sounds like, let me put pause at this, is that whatever happens in the future, we're going to leverage the vast system that has become cloud that we're talking about a supercloud, sort of where data lives irrespective of physical location. We're going to have to tap that data. It's not necessarily going to be in one place, but give us your final thoughts, please. >> 100% agree. I think that the data is going to live everywhere. It is the responsibility for both the metadata systems and the data processing engines themselves to make sure that we can join data across cloud providers, that we can join data across different physical regions and that we as practitioners are going to kind of start forgetting about details like that. And we're going to start thinking more about how we want to arrange our teams, how does the tooling that we use support our team structures? And that's when data mesh I think really starts to get very, very critical as a concept. >> Guys, great conversation. It was really awesome to have you. I can't thank you enough for spending time with us. Really appreciate it. >> Thanks a lot. >> All right. This is Dave Vellante for George Gilbert, John Furrier, and the entire Cube community. Keep it right there for more content. You're watching SuperCloud2. (upbeat music)
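Before the next segment, one more editorial sketch, this time tied to Tristan's point above about rolling warehouse concepts up behind APIs: a toy metric layer in which a metric is defined once as data and compiled to SQL on request, so the application developer never needs to know how the number is calculated. The metric, table, and column names are invented, and dbt's actual semantic layer differs in its details; this only illustrates the shape of the idea.

```python
# A toy metric layer: the metric is defined once as data, and a small compiler
# turns a request into SQL, so callers never see how the number is calculated.
metrics = {
    "monthly_recurring_revenue": {
        "table": "fct_subscriptions",
        "expression": "SUM(amount)",
        "time_column": "occurred_at",
        "default_grain": "month",
        "dimensions": ["plan", "region"],
    }
}

def metric_to_sql(name, grain=None, dims=()):
    m = metrics[name]
    grain = grain or m["default_grain"]
    unknown = [d for d in dims if d not in m["dimensions"]]
    if unknown:
        raise ValueError(f"dimension(s) not defined for {name}: {unknown}")
    group_by = ", ".join([f"DATE_TRUNC('{grain}', {m['time_column']})", *dims])
    return (f"SELECT {group_by}, {m['expression']} AS {name} "
            f"FROM {m['table']} GROUP BY {group_by}")

print(metric_to_sql("monthly_recurring_revenue", dims=["region"]))
```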
Wendi Whitmore, Palo Alto Networks | Palo Alto Networks Ignite22
>>The Cube presents Ignite 22, brought to you by Palo Alto Networks. >>Welcome back to Vegas. Guys. We're happy that you're here. Lisa Martin here covering with Dave Vellante, Palo Alto Networks Ignite 22. We're at MGM Grand. This is our first day, Dave, of two days of cube coverage. We've been having great conversations with the ecosystem with Palo Alto executives, with partners. One of the things that they have is Unit 42. We're gonna be talking with them next about cyber intelligence. And the threat data that they get is >>Incredible. Yeah. They have all the data, they know what's going on, and of course things are changing. The state of play changes. Hold on a second. I got a text here. Oh, my Netflix account was frozen. Should I click on this link? Yeah. What do you think? Have you had a, it's, have you had a little bit more of that this holiday season? Yeah, definitely. >>Unbelievable, right? A lot of smishing going on. >>Yeah, they're very clever. >>Yeah, we're very pleased to welcome back one of our alumni to theCUBE. Wendy Whitmore is here, the SVP of Unit 42. Welcome back, Wendy. Great to have >>You. Thanks Lisa. So >>Unit 42 created back in 2014. One of the things that I saw that you said in your keynote this morning or today was everything old is still around and it's co, it's way more prolific than ever. What are some of the things that Unit 42 is seeing these days with, with respect to cyber threats as the landscape has changed so much the last two years alone? >>You know, it, it has. So it's really interesting. I've been responding to these breaches for over two decades now, and I can tell you that there are a lot of new and novel techniques. I love that you already highlighted Smishing, right? In the opening gate. Right. Because that is something that a year ago, no one knew what that word was. I mean, we, it's probably gonna be invented this year, right? But that said, so many of the tactics that we have previously seen, when it comes to just general espionage techniques, right? Data exfiltration, intellectual property theft, those are going on now more than ever. And you're not hearing about them as much in the news because there are so many other things, right? We're under the landscape of a major war going on between Russia and Ukraine, of ransomware attacks, you know, occurring on a weekly basis. And so we keep hearing about those, but ultimately these nation-state actors are using that top cover, if you will, as a great distraction. It's almost like a perfect storm for them to continue conducting so much cyber espionage work that like we may not be feeling that today, but years down the road, they're, the work that they're doing today is gonna have really significant impact. >>Ransomware has become a household word in the last couple of years. I think even my mom knows what it is, to some degree. Yeah. But the threat actors are far more sophisticated than they've ever been. They're very motivated. They're very well funded. I think I've read a stat recently in the last year that there's a ransomware attack once every 11 seconds. And of course we only hear about the big ones. But that is a concern that goes all the way up to the board. >>Yeah. You know, we have a stat in our ransomware threat report that talks about how often victims are posted on leak sites. And I think it's once every seven minutes at this point that a new victim is posted.
Meaning a victim has had their data, a victim organization had their data stolen and posted on some leak site in the attempt to be extorted. So that has become so common. One of the shifts that we've seen this year in particular and in recent months, you know, a year ago when I was at Ignite, which was virtual, we talked about quadruple extortion, meaning four different ways that these ransomware actors would go out and try to make money from these attacks. And what they're doing now is often going to just one, which is, I don't even wanna bother with encrypting your data now, because that means that in order to get paid, I probably have to decrypt it. Right? That's a lot of work. It's time consuming. It's kind of painstaking. And so what they've really looked to do now is do the extortion where they simply steal the data and then threaten to post it on these leak sites, you know, release it to other parts of the web and, and go from there. And so that's really a blending of these techniques of traditional cyber espionage with intellectual property theft. Wow. >> How trustworthy are those guys in terms of, I mean, these are hackers, right? In terms of it's really the, the hacker honor system, isn't it? I mean, if you get compromised like that, you're really beholden to criminals. And so, you-- >> You know, so that's one of the key reasons why having the threat intelligence is so important, right? Understanding which group that you're dealing with and what their likelihood of paying is, what's their modus operandi. It's become even more important now because these groups switch teams more frequently than NFL trades, you know, free agents during the regular season, right? Or players become free agents. And that's because of their infrastructure. So the, you know, infrastructure, the servers, the systems that they're using to conduct these attacks from is actually largely being disrupted more from law enforcement, international intelligence agencies working together with public private partnerships. So what they're doing is saying, okay, great. All that infrastructure that I just had now is, is burned, right? It's no longer effective. So then they'll disband a team and then they'll recruit a new team and it's constant like mixing and matching in players. All that said, even though that's highly dynamic, one of the other areas that they pride themselves on is customer service. So, and I think it's interesting because, you know, when I said they're not wanting to like do all the decryption? Yeah. Cuz that's like painful, technical, slow work. But on the customer service side, they will create these customer service portals, immediately stand one up, say, you know, hey it's, it's like an Amazon, you know, if you've ever had to return a package on Amazon for example, and you need to click through and like explain, you know, Hey, I didn't receive this package. A portal window pops up, you start talking to either a bot or a live agent on the backend. In this case it's what appeared to be very much humans who are explaining to you exactly what happened, what they're asking for, super pleasant, getting back within minutes of a response. And they know that in order for them to get paid, they need to have good customer service because otherwise they're not going to, you know, have a business. How, >> So what's the state of play look like between nation states, criminals and how, how difficult or not so difficult is it for you to identify? Do you have clear signatures?
My understanding is with SolarWinds it was a little harder, but maybe help us understand and help our audience understand what the state of play is right now. >> One of the interesting things that I think is occurring, and I highlighted this this morning, is this idea of convergence. And so I'll break it down for one example, which relates to the type of malware or tools that these attackers use. So traditionally, if we looked at a nation state actor like China or Russia, they were very, very specific and very strategic about the types of victims that they were going to go after when they had a zero day. So, you know, new, new malware out there, new vulnerabilities that could be exploited only by them because the rest of the world didn't know about it. They might have one organization that they would target that at, at most, a handful and all very strategic for their objective. They wanted to keep that a secret as long as possible. Now what we're seeing actually is those same attackers going towards one, a much larger supply chain. So, so lorenzen is a great example of that. The Hafnium attacks towards Microsoft Exchange Server last year. All great examples of that. But what they're also doing is instead of using zero days as much, or you know, because those are expensive to build, they take a lot of time, a lot of funding, a lot of patience and research. What they're doing is using commercially available tools. And so there's a tool that our team identified earlier this year called Brute Ratel C4, or BRC4 for short. And that's a tool that we now know that nation state actors are using. But just two weeks ago we investigated a ransomware attack where the ransomware actor was using that same piece of tooling. So to your point, yeah, it can get difficult for defenders when you're looking through and saying, well wait, they're all using some of the same tools right now and some of the same approaches. When it comes to nation states, that's great for them because they can blend into the noise and it makes it harder to identify as >> Quickly. And, and is that an example of living off the land or is that BRC4 sort of a homegrown hacker tool? Is it, is it a, is it a commercial >> Off the shelf? So it's a tool that was actually, so you can purchase it, I believe it's about 2,500 US dollars for a license. It was actually created by a former red teamer from a couple well-known companies in the industry who then decided, well hey, I built this tool for work, I'm gonna sell this. Well great for red teamers that are, you know, legitimately doing good work, but not great now because they're, they built a, a strong tool that has the ability to hide amongst a, a lot of protocols. It can actually hide within Slack and Teams to where you can't even see that data is being exfiltrated. And so there's a lot of concern. And then now the reality that it gets into the wrong hands of nation state actors and ransomware actors, one of the really interesting things about that piece of malware is it has a setting where you can change wallpaper. And I don't know if you know offhand, you know what that means, but you know, if that comes to mind, what you would do with it. Well certainly a nation state actor is never gonna do something like that, right? But who likes to do that are ransomware actors, who can go in and change the background wallpaper on a desktop that says you've been hacked by XYZ organization and let you know what's going on.
So pretty interesting, obviously the developer doing some work there for different parts of the, you know, nefarious community. >> Tremendous amount of sophistication that's gone on the last couple of years alone. I was just reading that Unit 42 is now a founding member of the Cyber Threat Alliance, which now includes more than 35 organizations. So you guys are getting a very broad picture of today's threat landscape. How can customers actually achieve cyber resilience? Is it achievable and how do you help? >> So I, I think it is achievable. So let me kind of parse out the question, right. So the Cyber Threat Alliance, the JCDC, the Cyber Safety Review Board, which I'm a member of, right? I think one of the really cool things about Palo Alto Networks is just our partnerships. So those are just a handful. We've got partnerships with over 200 organizations. We work closely with the Ukrainian CERT, for example, sharing information, incredible information about like what's going on in the war, sharing technical details. We do that with Interpol on a daily basis where, you know, we're sharing information. Just last week the Africa Cyber Surge operation was announced, where millions of nodes were taken down that were part of these larger, you know, systems of C2 channels that attackers are using to conduct exploits and attacks throughout the world. So super exciting in that regard, and it's something that we're really passionate about at Palo Alto Networks. In terms of resilience, a few things, you know, one is visibility, so really having a, an understanding of, in as much of real time as possible, right? What's happening. And then it goes into how you, how can we decrease operational impact. So that's everything from network segmentation to, one of the terms and phrases I like to use a lot is, the win is really increasing the time it takes for the attackers to get their work done and decreasing the amount of time it takes for the defenders to get their work done, right? >> Yeah. I, I call it increasing the denominator, right? The ROI equation, benefit over cost, right? Benefit equals value over cost. If you can increase the cost, they go elsewhere, right? Absolutely. And that's the, that's the game. Yeah. You mentioned Ukraine before, what have we learned from Ukraine? I, I remember I was talking to Robert Gates years ago, 2016 I think, and I was asking him, yeah, but don't we have the best cyber technology? Can't we attack? He said, we got the most to lose too. Yeah. And so what have we learned from, from Ukraine? >> Well, I, I think that's part of the key point there, right? Is you know, a great offense essentially can also be for us, you know, a deterrent. So in that aspect we have as an, as a company and, or excuse me, as a country, as a company as well, but then as partners throughout all parts of the world, have really focused on increasing the intelligence sharing and specifically, you know, I mentioned the Ukrainian CERT. There are so many different agencies and other CERTs throughout the world that are doing everything they can to share information to help protect human life there. And so what we've really been concerned with is, you know, what cyber warfare elements are going to be used there, not only how does that impact Ukraine, but how does it potentially spread out to other parts of the world, critical infrastructure. So you've seen that, you know, I mentioned the CSRB, but CISA, right?
>>CISA has done a tremendous job of continuously getting out information and doing everything they can to make sure that we are collaborating at a commercial level. You know, we are sharing information and intelligence more than ever before. So partners like Mandiant and CrowdStrike, our intel teams are working together on a daily basis to make sure that we're able to protect not only our clients, but certainly if we've got any relevant information, we can share that as well. And I think if there's any silver lining to an otherwise very awful situation, I think the fact that it has accelerated intelligence sharing is really positive. >> I was gonna ask you about this cause I think, you know, 10 or so years ago, there was a lot of talk about that, but the industry, you know, kind of kept things to themselves, you know, and actually tried to monetize some of that private data. So that's changing is what I'm hearing from you. >> More so than ever. You know, I've, I mentioned I've been in the field for 20 years. You know, it, it's tough when you have a commercial business that relies on, you know, information to, in order to pay people's salaries, right? I think that has changed quite a lot. We see the benefit of just that continuous sharing. There are, you know, so many more walls broken down between these commercial competitors, but also the work on the public private partnership side has really increased some of those relationships. Made it easier. And you know, I have to give a whole lot of credit and mention CISA, like the fact that during Log4j, like they had GitHub repositories, they were using Slack, they were using Twitter. So the government has really started pushing forward with a lot of the newer leadership that's in place to say, Hey, we're gonna use tools and technology that works to share and disseminate information as quickly as we can. Right? That's fantastic. That's helping everybody. >> We knew that every industry, no, nobody's spared of this. But did you notice in the last couple of years, any industries in particular that are more vulnerable? Like I think of healthcare with personal health information or financial services, any industries kind of jump out as being more susceptible than others? >> So I think those two are always gonna be at the forefront, right? Financial services and healthcare. But what's been really top of mind is critical infrastructure, just making sure, right, that our water, our power, our fuel, so many other parts of, right, the ecosystem that go into making sure that, you know, we're keeping, you know, houses heated during the winter, for example, that people have fresh water. Those are extremely critical. And so that is really a massive area of focus for the industry right now. >> Can I come back to public-private partnerships? My question relates to regulations, because public policy tends to be behind tech, the technology industry, as an understatement. So when you take something like GDPR, that's the obvious example, but there are many, many others, data sovereignty, you can't move the data. Is there tension between your desire, our desire as an industry, to share data and government's desire to keep data private and restrict that data sharing? How is that playing out? How do you resolve that? >> Well I think there have been great strides right in each of those areas. So in terms of regulation when it comes to breaches, there, you know, has been a tendency in the past to do victim shaming, right?
And for organizations to not want to come forward because they're concerned about the monetary funds, right? I think there's been tremendous acceleration. You're seeing that everywhere from the fbi, from cisa, to really working very closely with organizations to, to have a true impact. So one example would be a ransomware attack that occurred. This was for a client of ours within the United States and we had a very close relationship with the FBI at that local field office and made a phone call. This was 7:00 AM Eastern time. And this was an organization that had this breach gone public, would've made worldwide news. There would've been a very big impact because it would've taken a lot of their systems offline. >>Within the 30 minutes that local FBI office was on site said, we just saw this piece of malware last week, we have a decryptor for it from another organization who shared it with us. Here you go. And within 60 minutes, every system was back up and running. Our teams were able to respond and get that disseminated quickly. So efforts like that, I think the government has made a tremendous amount of headway into improving relationships. Is there always gonna be some tension between, you know, competing, you know, organizations? Sure. But I think that we're doing a whole lot to progress it, >>But governments will make exceptions in that case. Especially for something as critical as the example that you just gave and be able to, you know, do a reach around, if you will, on, on onerous regulations that, that ne aren't helpful in that situation, but certainly do a lot of good in terms of protecting privacy. >>Well, and I think there used to be exceptions made typically only for national security elements, right? And now you're seeing that expanding much more so, which I think is also positive. Right. >>Last question for you as we are wrapping up time here. What can organizations really do to stay ahead of the curve when it comes to, to threat actors? We've got internal external threats. What can they really do to just be ahead of that curve? Is that possible? >>Well, it is now, it's not an easy task so I'm not gonna, you know, trivialize it. But I think that one, having relationships with right organizations in advance always a good thing. That's a, everything from certainly a commercial relationships, but also your peers, right? There's all kinds of fantastic industry spec specific information sharing organizations. I think the biggest thing that impacts is having education across your executive team and testing regularly, right? Having a plan in place, testing it. And it's not just the security pieces of it, right? As security responders, we live these attacks every day, but it's making sure that your general counsel and your head of operations and your CEO knows what to do. Your board of directors, do they know what to do when they receive a phone call from Bloomberg, for example? Are they supposed supposed to answer? Do your employees know that those kind of communications in advance and training can be really critical and make or break a difference in an attack. >>That's a great point about the testing but also the communication that it really needs to be company wide. Everyone at every level needs to know how to react. Wendy, it's been so great having, >>Wait one last question. Sure. Do you have a favorite superhero growing up? >>Ooh, it's gotta be Wonder Woman. Yeah, >>Yeah, okay. Yeah, so cuz I'm always curious, there's not a lot of women in, in security in cyber. 
How'd you get into it? And many cyber pros like wanna save the world? >>Yeah, no, that's a great question. So I joined the Air Force, you know, I, I was a special agent doing computer crime investigations and that was a great job. And I learned about that from, we had an alumni day and all these alumni came in from the university and they were in flight suits and combat gear. And there was one woman who had long blonde flowing hair and a black suit and high heels and she was carrying a gun. What did she do? Because that's what I wanted do. >>Awesome. Love it. We >>Blonde >>Wonder Woman. >>Exactly. Wonder Woman. Wendy, it's been so great having you on the program. We, we will definitely be following unit 42 and all the great stuff that you guys are doing. Keep up the good >>Work. Thanks so much Lisa. Thank >>You. Day our pleasure. For our guest and Dave Valante, I'm Lisa Martin, live in Las Vegas at MGM Grand for Palo Alto Ignite, 22. You're watching the Cube, the leader in live enterprise and emerging tech coverage.
Garrett Lowell, Console Connect | AWS re:Invent 2022
(gentle music) >> Good afternoon, cloud community and welcome back to fabulous Las Vegas, Nevada. We are at AWS re:Invent. It's our fourth day, it's in the afternoon. We've got two more segments left. This is a serious marathon, but it's so exciting, it's kept my brain super curious. I'm Savannah Peterson, joined by Paul Gillan today. Paul- >> Hello Savannah. >> Are you as excited about how much we've learned this week as I am? >> I am. It's just taking, my mind is just bursting with all the new information I've absorbed over the last three days. Amazing talking to all these smart people. >> It has really been so cool. >> And learning about all the permutations that we to think about cloud but there are so many businesses that have been built around the cloud, around making the cloud easier to use, supporting cloud as our next guest can talk about, that there's this whole ecosystem element that we don't hear about so much, but it's very much the foundation of the people who are here. >> Speaking of ecosystem, our next guest, please welcome Garrett to the show. Runs Ecosystem for Console Connect. How you doing Eric? >> I'm doing very well. >> Savannah: Garrett, sorry. Excuse me. >> No worries. >> Few names on the show today. >> Garrett: I'm sure. >> I do know your name, my mouth just doesn't want to, just doesn't want to participate today. Have you had a great show so far? >> It's been fantastic. You know, the AWS re:Invent show has always been a fantastic event, so. >> You're a veteran. You're also a CUBE alumni, which is great. >> Yes. Thank you for having me back. Thank you for your time. I most appreciated it. >> Yeah. We love having you. It's going to be great. We'll, we'll try and do even better each time we have you on the show. So just in case those listening are unfamiliar with Console Connect, give us the pitch. >> Okay, so Console Connect is our software defined interconnect platform. We also provide what we call network as a service. This allows our customers and partners to take advantage of our global private network on a pay as you go basis. Scalable and flexible. When you're not using the service, you can turn it off. So you only pay as you go. >> What a novel idea. >> Yes, yes. In the past you would have to have a year or multi-year contract. So we're making our services match cloud offerings around the world. The platform itself is in more than a thousand data centers all around the globe. >> Savannah: Just a couple. >> Yes, just a few. We have about 45 terabits of network behind it. It's all on our private network, so none of it's accessible via the public internet and we have a meeting place which allows our existing customers and partners to reach out across the platform and share services. So one customer needs to subscribe to another customer services, they can do so right across the platform on a pay as to go basis. So it's been very exciting for us. It's been very fast, it seems to me, for the past five or six years that we've had the service. >> At what point in their cloud journey do customers typically realize they need a service like yours? If the bandwidth they're getting, their native bandwidth they're getting is insufficient. >> Yeah, and I think that's a great question. I think the customers themselves have seen a serious disconnect between their direct connections to the cloud service providers where the cloud service providers are billing by the minute. 
And a traditional telecommunications connection is billed by the year or multi-years, and then you really lose control over your cloud connection when you forget about it, right? Because service is always up. The connection's always up. >> Yeah. >> And a lot of individuals in a company may have access to the cloud, that cloud service, provider service. And next thing you know, you have a runaway group of services that are running that you're paying for and you don't really realize it, 'cause the connection's up, you've already paid the connection, the cloud service is up, you've already paid for it. >> So how do businesses get better control over that spend or how do you help them? >> Yeah, so how we help them is our service is able to be turned off when it's not in use. So in the event that you don't need the service over a weekend or over a month, you can just turn it off and you're not paying for that. >> It sounds so simple but it actually is kind of revolutionary in the industry which is why I keep coming back to it. It's great. So we've heard a lot about hybrid cloud, multicloud. How is this increasing the complexity for customers? >> Well I think the complexity for customers has increased due to the fact that you have a multicloud requirement or you have multiple teams accessing your cloud service provider and there's no one really managing it from a central perspective. >> Savannah: They can definitely get siloed really easy. >> Yeah, and then it runs away from you and the next thing you know, you start to look at the monthly bills. But generally that happens on an annual basis. If any company's like mine, you're doing your annual reconciliation of your bills and that's when you notice something's not right. >> Yeah, definitely. I can actually see a Slack message I got once, multiple times probably. Is anyone using this service? Why does it cost us that? That's exactly what you're talking about. >> Do you integrate with the Amazon Management Console or is it a separate service? >> It's, our service is a separate service. We are API'd in with AWS. You do have a single console from our platform to manage your connections to the cloud. And then once you are connected in, you would still need to use the AWS console to manage your service. They're very, let's just say no one is offering a remote, third-party console yet for AWS or any other cloud service for that matter. >> How about hybrid cloud? It's obviously the way, you know, the way the industry is going. How do you enable companies to manage their hybrid cloud environments more intelligently? >> Yeah, and that's another great question. We allow that, you know, we're a global company. We have global access around the world. It includes not only traditional telecommunication services but also includes satellite service as well as 5G and LTE capability to the platform. So in the event that someone is in a hybrid cloud situation, they have a lot of capability to enable their services. >> You talked about network as a service, and I, we haven't had a chance to dig into it. So tell me a little bit more. How does, how can this help reduce egress charges? How, are people excited when they hear network as a service? Where are we off at on that hype curve? >> Yeah, I think it's low on the excitement scale. >> Savannah: Yeah. >> You know, network has become somewhat of a commodity in the world, like electricity or water, you know, for most of the world.
And so network as a service, what it has enabled is giving the customers more control over what they're doing. 'Cause in the past, you would need weeks, if not months, to get services installed. And then if you needed to make a change to that service, to increase it or decrease it in accordance with your requirements, that might take a couple of days at the soonest, and you know, the Console Connect platform has now changed that down to a few minutes. So within a few minutes, you can enable services, turn it up, turn it down, scale up, scale down. >> Savannah: Talk about time to value. >> There's no equipment installation required? >> No, it is our private network and so there must be a direct connection to it. It's not available over the public internet. Generally, a customer will connect to us via a cross connect at a data center or they can bring in a local loop. For our existing customers, we just flip a little switch, so to speak, software wise, and we give them access to the platform from their existing services. >> Do you work with co-location interconnects as well? >> Exactly, yes. And in fact, you can purchase those services across our platform with a lot of the co-location service providers. >> So if I'm already using a co-lo, I can deploy your service directly from that co-lo. >> Yes. Yes. >> That's very convenient. >> That is very convenient. (laughs) >> You also mentioned the ability to interconnect between customers. So your customers can actually connect to each other and conduct transactions or integrate their applications. Talk about how that works. >> Yeah, so for instance, let's say you are a customer that's taking advantage of our platform and you find your network is under a DDoS attack. You can go into our meeting place, connect to one of our cloud service providers who specializes in DDoS mitigation, spin up a connection to them within a few minutes, and immediately, you can start taking care of your DDoS problem. And once it's taken care of, you turn it down. Now those types of services that are subscription based are via API into our platform, so we can settle the bill for our customer on behalf of that service provider, or the service provider themselves can bill that customer, depending on how they want to set it up. So it's very flexible. >> It's really clever, too. I mean, especially in an instance like you just mentioned in that example, that's a moment of panic and high stress and high tension. The last thing you want to be thinking about is what's the right service provider? How quickly can I get this up and running? If I can just, with a couple clicks, a couple lines of code perhaps, or even just through the portal, be able to do it, it's pretty powerful. You mentioned Console Connect, and I want to talk about this 'cause it's clear you care about the user experience and the community. Console Connect came out of LinkedIn DNA, and you mentioned there's a social component to the platform as well. Can you tell me a little bit more about that? >> Yes, thank you for that. Yeah, so you can, as a customer or a partner, you can market directly to others on the platform using our meeting place. And you have the ability to reach out directly to people across the platform, send them a message. You have the ability to post articles, blog in one of our sections. And then the other one, you can actually go in and see all the latest activity in the platform. You can see who the newest companies to join Console Connect are. >> Savannah: Oh wow, cool. >> How do I reach out to them?
And then that gives you the ability to begin either marketing across the platform or direct marketing to someone, or directly just reach out and connect with them and say hey, we want to set up a bilateral partnership with you. You know, how do we do that? So it's very flexible. >> Savannah: Yeah. >> Can I connect my systems to others? So if I want to plug into their eCommerce system so I can fulfill orders taken through their eCommerce system, can I enable that kind of connection? >> Oh, we're not there yet. It is coming, but we're just not there yet. >> What are the complexities? >> A lot of that is a trust issue. >> Yeah. >> You know, when you're dealing across the globe, there are regulations in every location that must be adhered to. A lot of that is security and privacy related. And we must make sure that we are adhering to all the local regulations wherever we are. >> So it's not the technology that's the problem, really. It's the- >> It's a regulatory issue, yeah. So the technology is there and I would say that the rest is following, it's just, it's slow when you're dealing with permits and with compliance. >> I also want to ask you, our notes here mention egress charges, which are a niggling pain point for a lot of customers. They have to pay to get their data out of the cloud. How do you help with that problem? >> So how we help with this is first, we get a discount from our partners, our cloud partners, including AWS, and we pass that on to the customers. The other way is you have full visibility of which connections you have live into those partners, and you can manage that much more easily through the single, I would say, view of all of your connections. >> Savannah: Yeah. >> You can see all of your cloud connections right in the one view. And then you can do a little more digging and say, are we using these, you know? Because a lot of times, you have projects that spin up and then someone forgets to spin them back down. So this helps give you that single view. But again, we get the discount that we are happy to pass on as well. >> Which is a win-win for everyone. I've been using a tab analogy all show, we all want it in one place, one tab, not all the tabs. >> Yes. I think network management and service management in any enterprise or partnership company is a real drain on resources. >> Oh yeah. And it's a waste of money. >> Garrett: Yeah. And if you're not managing correctly, yeah, you get the thing on the money. >> Are you an alternative to the direct connect services from the major cloud providers or are you a complement to them? >> We're not competing with them, we're partnered. And so we don't see ourselves as an alternative. A lot of times, our customers come to us and they want to direct connect in a location where perhaps AWS isn't. >> Paul: Doesn't have a point of presence. >> Exactly. Right. We give them that flexibility of, yes you can directly connect here. And then the other approach that we like to take is we like to give our customers the choice of not only data center, but also region. So a lot of times egress charges can be calculated across regions as well, and that can really add up for our customer. Whereas if you have multiple egress locations, you're not transferring data across a region on the AWS platform or another cloud service platform. You can egress at that location and then take it across your own network or take it across our network, and then your egress charges will be more reasonable. >> That's, it's convenient. Smart!
You're making people's jobs optimized and easier as well as their stack and all the tools that they're using. It's fantastic. All right Garrett, we've got a new challenge here on theCUBE at re:Invent. >> Garrett: All right. >> It's probably different from the last time you were on theCUBE. We're looking for your 30 second hot take, your thought leadership moment. What's the biggest theme coming out of the show or for you as we look into 2023? >> Well, for, in 30 seconds- >> Savannah: Yeah, casual, right? >> No pressure. >> Savannah: No big deal. >> No, so with Console Connect, you know, we are around the globe. I know that a lot of companies at AWS are, some are regional, some are global. And we have the ability to cover both. We can do either regional or global or a hybrid of those. We also have a hybrid approach on different types of services. And so the flexibility, scalability, reliability, and the lowered cost of egress with Console Connect is a win all around. You can't lose with it. >> I love it. You're meeting customers where they are. Garrett, it was fantastic to have you back on theCUBE. We look forward to your third cameo. >> Thank you very much. I appreciate your time. Thank you for having Console Connect on. >> Hey, absolutely. We look forward to continuing to watch and hopefully tell that story as well. And thank all of you for tuning in to day four of AWS's re:Invent coverage in Las Vegas, Nevada. I'm starting to forget my own name. I am with Paul Gillan. I'm Savannah Peterson. This is theCUBE. We are the leading source for high tech coverage. (gentle music)
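A quick illustration of the pay-as-you-go point Garrett keeps returning to: the sketch below compares a fixed annual circuit against an interconnect that is only switched on while it is needed. It is a rough model with made-up rates, not Console Connect or AWS pricing.

```python
# Rough cost model for the pay-as-you-go idea described above: a private
# interconnect that is switched off outside business hours versus a fixed
# annual circuit. Every number here is a made-up placeholder, not Console
# Connect or AWS pricing.

HOURS_PER_WEEK = 7 * 24
BUSINESS_HOURS_PER_WEEK = 5 * 10  # assume the link is only needed 10 hours/day, Mon-Fri


def fixed_contract_weekly(annual_price: float) -> float:
    """Weekly cost of an always-on, year-long circuit."""
    return annual_price / 52


def usage_based_weekly(hourly_rate: float, hours_on: float) -> float:
    """Weekly cost when the connection is only enabled while it is needed."""
    return hourly_rate * hours_on


if __name__ == "__main__":
    fixed = fixed_contract_weekly(annual_price=26_000)  # placeholder: $26k/year circuit
    on_demand = usage_based_weekly(hourly_rate=6.0, hours_on=BUSINESS_HOURS_PER_WEEK)
    print(f"always-on contract: ${fixed:.2f}/week")
    print(f"pay-as-you-go link: ${on_demand:.2f}/week "
          f"({BUSINESS_HOURS_PER_WEEK} of {HOURS_PER_WEEK} hours enabled)")
```

Even with toy numbers, the gap shows why being able to turn a link off over a weekend or a month, as Garrett describes, changes the spend conversation.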
Lena Smart, MongoDB | AWS re:Invent 2022
(bright music) >> Hello everyone and welcome back to AWS re:Invent, here in wonderful Las Vegas, Nevada. We're theCUBE. I am Savannah Peterson. Joined with my co-host, Dave Vellante. Day four, you look great. Your voice has come back somehow. >> Yeah, a little bit. I don't know how. I took last night off. You guys, I know, were out partying all night, but - >> I don't know what you're talking about. (Dave laughing) >> Well, you were celebrating John's birthday. John Furrier's birthday today. >> Yes, happy birthday John! >> He's on his way to England. >> Yeah. >> To attend his nephew's wedding. Awesome family. And so good luck, John. I hope you feel better, he's got a little cold. >> I know, good luck to the newlyweds. I love this. I know we're both really excited for our next guest, so I'm going to bring out Lena Smart from MongoDB. Thank you so much for being here. >> Thank you for having me. >> How's the show going for you? >> Good. It's been a long week. And I just, not much voice left, so. >> We'll be gentle on you. >> I'll give you what's left of it. >> All right, we'll take that. >> Okay. >> You had a fireside chat, at the show? >> Lena: I did. >> Can you tell us a little bit about that? >> So we were talking about the rise of the developer data platform. In this massive theater. I thought it would be like an intimate, you know, fireside chat. I keep believing them when they say to me come and do these talks, it'll be intimate. And you turn up and there's a stage and a theater and it's like, oh my god. But it was really interesting. It was well attended. Got some really good questions at the end as well. Lots of follow up, which was interesting. And it was really just about, you know, how we've brought together this developer platform that's got our integrated services. It's just what developers want, it gives them time to innovate and disrupt, rather than worry about the minutiae of management. >> Savannah: Do the cool stuff. >> Exactly. >> Yeah, so you know Lena, it's funny that you're saying that oh wow, the lights came on and it was this big thing. When we were at re:Inforce, Lena was on stage and it was so funny, Lena, you were self-deprecating, like making jokes about the audience. >> Savannah: (indistinct) >> It was hilarious. And so, but it was really endearing to the audience and so we were like - >> Lena: It was terrifying. >> You got huge props for that, I'll tell you. >> Absolutely terrifying. Because they told me I wouldn't see anyone. Because we did the rehearsal the day before, and they were like, it's just going to be like - >> Sometimes it just looks like blackness out there. >> Yeah, yeah. It wasn't, they lied. I could see eyeballs. It was terrifying. >> Would you rather know that going in though? Or is it better to be, is ignorance bliss in that moment? >> Ignorance is bliss. >> Yeah, yeah yeah. >> Good call Savannah, right? Yeah, just go. >> The older I get, the more I'm just, I'm on the ignorance is bliss train. I just, I don't need to know anything that's going to hurt my soul. >> Exactly. >> One of the things that you mentioned, and this has actually been a really frequent theme here on the show this week, is you said that this has been a transformative year for developers. >> Lena: Yeah. >> What did you mean by that? >> So I think developers are starting to come to the fore, if you like, the fore. And I'm not in any way being deprecating about developers 'cause I love them. >> Savannah: I think everyone here does.
>> I was married to one, I live with one now. It's like, they follow me everywhere. They don't. But, I think they, this is my opinion obviously, but I think that we're seeing more and more the value that developers bring to the table. They're not just code geeks anymore. They're not just code monkeys, you know, churning out lines and lines of code. Some of the most interesting discussions I've had this week have been with developers. And that's why I'm so pleased that our developer data platform is going to give these folks back time, so that they can go and innovate. And do super interesting things and do the next big thing. It was interesting, I was talking to Mary, our comms person, earlier and she had said that Dave I guess, my boss, was on your show - >> Dave: Yeah, he was over here last night. >> Yeah. And he was saying that two thirds of the companies that had been mentioned so far, within the whole gamut of this conference, use MongoDB. And so take that, extrapolate that, of all the developers >> Wow. >> who are there. I know, isn't that awesome? >> That's awesome. Congrats on that, that's like - >> Did I hear that right now? >> I know, I just had that moment. >> I know she just told me, I'm like, really? That's - >> That's so cool. >> 'Cause the first thing I thought of was then, oh my god, how many developers are we reaching then? 'Cause they're the ones. I mean, it's kind of interesting. So my job has kind of grown from, over the years, being the security geek in the back room that nobody talks to, to avoiding me in the lift, to I've got a seat at the table now. We meet with the board. And I think that I can see that that's where the developer mindset is moving towards. It's like, give us the right tools and we'll change your world. >> And let the human capital go back to doing the fun stuff and not just the maintenance stuff. >> And, but then you say that, you can't have everything automated. I get that automation is also the buzzword of the week. And I get that, trust me. Someone has to write the code to do the automation. >> Savannah: Right. >> So, so yeah, definitely give these people back time, so that they can work on ML, AI, choose your buzzword. You know, by giving people things like queryable encryption for example, you're going to free up a whole bunch of head space. They don't have to worry about their data being, you know, harvested from memory or harvested while at rest or in motion. And it's like, okay, I don't have to worry about that now, let me go do something fun. >> How about the role of the developer as it relates to SecOps, right? They're being asked to do a lot. You and I talked about this at re:Inforce. You seem to have a pretty good handle on it. Like a lot of companies I think are struggling with it. I mean, the other thing you said to me is you don't have a lack of talent at Mongo, right? 'Cause you're Mongo. But a lot of companies do. But a lot of the developers, you know we were just talking about this earlier with Capgemini, the developer metrics or the application development team's metrics might not be aligned with the CSO's metrics. How, what are you seeing there? What, how do you deal with it within Mongo? What do you advise your customers? >> So in terms of internal, I work very closely with our development group. So I work with Tara Hernandez, who's our new VP of developer productivity. And she and her team are very much interested in making developers more productive. That's her job.
And so we get together because sometimes security can definitely be seen as a blocker. You know, funnily enough, I actually had a Slack that I had to respond to three seconds before I came on here. And it was like, help, we need some help getting this application through procurement, because blah, blah, blah. And it's weird the kind of change, the shift in mindset. Whereas before they might have gone to procurement or HR or someone to ask for this. Now they're coming to the CSO. 'Cause they know if I say yes, it'll go through. >> Talk about social engineering. >> Exactly. >> You were talking about - >> But turn it around though. If I say no, you know, I don't like to say no. I prefer to be the CSO that says yes, but. And so that's what we've done. We've definitely got that culture of ask, we'll tell you the risks, and then you can go away and be innovative and do what you need to do. And we basically do the same with our customers. Here's what you can do. Our application is secure out of the box. Here's how we can help you make it even more, you know, streamlined or bespoke to what you need. >> So mobile was a big inflection point, you know, I dunno, it seems like forever ago. >> 2007. >> 2007. Yeah, iPhone came out in 2007. >> You remember your first iPhone? >> Dave: Yeah. >> Yeah? Same. >> Yeah. It was pretty awesome, actually. >> Yeah, I do too. >> Yeah, I was on the train to Boston going up to see some friends at MIT on the consortium that I worked with. And I had, it was the wee one, 'member? But you thought it was massive. >> Oh, it felt - >> It felt big. And I remember I was sitting on the train to Boston, it was like the Acela, and there were these people, these two women sitting beside me. And they were all like glam, like you and unlike me. >> Dave: That's awesome. >> And they, you could see them like nudging each other. And I'm being like, I'm just sitting like this. >> You're chilling. >> Like please look at my phone, come on just look at it. Ask me about it. And eventually I'm like - >> You're baiting them. >> nonchalantly laid it on the table. And you know, I'm like, and they're like, is that an iPhone? And I'm like, yeah, you want to see it? >> I thought you'd never ask. >> I know. And I really played with it. And I showed them all the cool stuff, and they're like, oh we're going to buy iPhones. And so I should have probably worked for Apple, but I didn't. >> I was going to say, where was your referral kickback on that? Especially - >> It was a little like Tesla, right? When you first, we first saw Tesla, it was Ray Wong, you know, Ray? From Pasadena? >> It really was a moment and going from the Blackberry keyboard to that - >> He's like want to see my car? And I'm like oh yeah sure, what's the big deal? >> Yeah, then you see it and you're like, ooh. >> Yeah, that really was such a pivotal moment. >> Anyway, so we lost track, 2007. >> Yeah, what were we talking about? 2007 mobile. >> Mobile. >> Key inflection point, is where you got us here. Thank you. >> I gotchu Dave, I gotchu. >> Bring us back here. My mind needs help right now. Day four. Okay, so - >> We're all getting here on day four, we're - >> I'm socially engineering you to end this, so I can go to bed and die quietly. That's what me and Mary are, we're counting down the minutes. >> Holy. >> That's so sick. >> You're breaking my heart right now. I love it. I'm with you, sis, I'm with you. >> So I dunno where I was, really where I was going with this, but, okay, there's - >> 2007. Three things happened.
Another inflection point. Okay yeah, tell us what happened. But no, tell us that, but then - >> AWS, cloud, 2006. >> Well 2006, 2007. Right, okay. >> 2007, the iPhone, the world blew up. So you've already got this platform ready to take all this data. >> Dave: Right. >> You've got this little slab of gorgeousness called the iPhone, ready to give you all that data. And then MongoDB pops up, it's like, woo-hoo. But what we could offer was, I mean, back then it was awesome, but it was, we knew that we would have to iterate and grow and grow and grow. So that was kind of the three things that came together in 2007. >> Yeah, and then Cloud came in big time, and now you've got this platform. So what's the next inflection point do you think? >> Oh... >> Good question, Dave. >> Don't even ask me that. >> I mean, is it Edge? Is it IOT? Is there another disruptor out there? >> I think it's going to be artificial intelligence. >> Dave: Is it AI? >> I mean I don't know enough about it to talk about it, to any level, so don't ask me any questions about it. >> This is like one of those ignorance is bliss moments. It feels right. >> Yeah. >> Well, does it scare you, from a security perspective? Or? >> Great question, Dave. >> Yeah, it scares me more from a humanity standpoint. Like - >> More than social scared you? 'Cause social was so benign when it started. >> Oh it was - >> You're like, oh - I remember, >> It was like a yearbook. I was on the Acela and we were - >> Shout out to Amtrak there. >> I was with, we were starting basically Wikibon, it was an open source. >> Yeah, yeah. >> Kind of, you know, technology community. And we saw these and we were like enamored of Facebook. And there were these two young kids on the train, and we were at 'em, we were picking their brains. Do you like Facebook? "I love Facebook." They're like "oh, Facebook's unbelievable." Now, kids today, "I hate Facebook," right? So, but social at the beginning it was kind of, like I say, benign and now everybody's like - >> Savannah: We didn't know what we were getting into. >> Right. >> I know. >> Exactly. >> Can you imagine if you could have seen into the future 20 years ago? Well first of all, we'd have all bought Facebook and Apple stock. >> Savannah: Right. >> And Tesla stock. But apart from, but yeah apart from that. >> Okay, so what about Quantum? Does that scare you at all? >> I think the only thing that scares me about Quantum is we have all this security in place today. And I'm not an expert in Quantum, but we have all this security in place that's securing what we have today. And my worry is, in 10 years, is it still going to be secure? 'Cause we're still going to be using that data in some way, shape, or form. And my question is to the quantum geniuses out there, what do we do in 10 years, like, to retrofit the stuff? >> Dave: Like a Y2K moment? >> Kind of. Although I think Y2K is coming in 2038, isn't it? When the Linux date flips. I'll be off the grid by then, I'll be living in Scotland. >> Somebody else's problem. >> Somebody else's problem. I'll be with the sheep in Glasgow, in Scotland. >> Y2K was a boondoggle for tech, right? >> What a farce. I mean, that whole - >> I worked in the power industry in Y2K. That was a nightmare. >> Dave: Oh I bet. >> Savannah: Oh my God. >> Yeah, 'cause we just assumed that the world was going to stop and there'd be no power, and we had nuclear power plants. And it's like holy moly. Yeah. >> More than moly. >> I was going to say, you did a good job holding that other word in.
I think I was going to, in case my mom hears this. >> I grew up near Diablo Canyon in, in California. So you were, I mean we were legitimately worried that that exactly was going to happen. And what about the waste? And yeah it was chaos. We've covered a lot. >> Well, what does worry you? Like, is it culture? Is it - >> Why are you trying to freak her out? >> No, no, because it's a CSO, trying to get inside the CSO's head. >> You don't think I have enough to worry about? You want to keep piling on? >> Well if it's not Quantum, you know? Maybe it's spiders or like - >> Oh but I like spiders, well spiders are okay. I don't like bridges, that's my biggest fear. Bridges. >> Seriously? >> And I had to drive over the Tappan Zee bridge, which is one of the longest, for 17 years, every day, twice. The last time I drove over it, I was crying my heart out, and happy as anything. >> Stay out of Oakland. >> I've never driven over it since. Stay out of where? >> Stay out of Oakland. >> I'm staying out of anywhere that's got lots of water. 'Cause it'll have bridges. >> Savannah: Well it's good we're here in the desert. >> Exactly. So what scares me? Bridges, there you go. >> Yeah, right. What? >> Well wait a minute. So if I'm bridging technology, is that the scary stuff? >> Oh God, that was not - >> Was it really bad? >> It was really bad. >> Wow. Wow, the puns. >> There's a lot of seams in those bridges. >> It is lit on theCUBE floor, we are all struggling. I'm curious because I've seen, your team is all over the place here on the show, of course. Your booth has been packed the whole time. >> Lena: Yes. >> The fingerprint. Talk to me about your shirt. >> So, this was designed by my team in house. It is the most wanted swag in the company, because only my security people wear it. So, we make it like, yeah, you could maybe have one, if this turns out well. >> I feel like we're on the right track. >> Dave: If it turns out well. >> Yeah, I just love it. It's so, it's just brilliant. I mean, it's the leaf, it's a fingerprint. It's just brilliant. >> That's why I wanted to call it out. You know, you see a lot of shirts, a lot of swag shirts. Some are really unfortunately sad, or not funny, >> They are. >> or they're just trying too hard. Now there's like, with this one, I thought oh I bet that's clever. >> Lena: It is very cool. Yes, I love it. >> I saw a good one yesterday. >> Yeah? >> We fix shit, 'member? >> Oh yeah, yeah. >> That was pretty good. >> I like when they're >> That's a pretty good one. >> just straightforward, like that, yeah yeah. >> But the only thing with this is when you're, say, in front of a green screen, you look as though you've got no tummy. >> A portal through your body. >> And so, when we did our first - >> That's a really good point, actually. >> Yeah, it's like the black hole to nothingness. And I'm like wow, that's my soul. >> I was just going to say, I don't want to see my soul like that. I don't want to know. >> But we had to do like, it was just when the pandemic first started, so we had to do our big presentation live announcement from home. And so they shipped us all this camera equipment for home and thank God my partner knows how that works, so he set it all up. And then he had me test with a green screen, and he's like, you have no tummy. I'm like, what the hell are you talking about? He's like, come and see. It's like this, I dunno what it was. So I had to actually go upstairs and felt tip with a magic marker and make it black. >> Wow.
So that was what I did for two hours on a Friday, yeah. >> Couldn't think of another alternative, huh? >> Well no, 'cause I'm myopic when it comes to marketing and I knew I had to keep the tshirt on, and I just did that. >> Yeah. >> In hindsight, yes I could have worn an "I Fix Shit" tshirt, but I don't think my husband would've been very happy. I secure shit? >> There you go, yeah. >> There you go. >> Over to you, Savannah. >> I was going to say, I got acquainted, I don't know if I can say this, but I'm going to say it 'cause we're here right now. I got acquainted with theCUBE, wearing a shirt that said "Unfuck Kubernetes," 'cause it was a marketing campaign that I was running for one of my clients at KubeCon last year. >> That's so good. >> Yeah, so - >> Oh my God. I'll give you one of these if you get me one of those. >> I can, we can do a swapskee. We can absolutely. >> We need a few edits on this film, on the file. >> Lena: Okay, this is nothing - >> We're fallin' off the wheel. Okay, on that note, I'm going to bring us to our challenge that we discussed, before we got started on this really diverse discussion that we have had in the last 15 minutes. We've covered everything from felt tip markers to nuclear power plants. >> To the darkness of my soul. >> To the darkness of all of our souls. >> All of our souls, yes. >> Which is perhaps a little too accurate, especially at this stage in the conference. You've obviously seen a lot Lena, and you've been rockin' it, I know John was in your suite up here, at the Venetian. What's your 30 second hot take? Most important story, coming out of the show or for you all at Mongo this year? >> Genuinely, it was when I learned that two-thirds of the customers that had been mentioned, here, are MongoDB customers. And that just exploded in my head. 'Cause now I'm thinking of all the numbers and the metrics and how we can use that. And I just think it's amazing, so. >> Yeah, congratulations on that. That's awesome. >> Yeah, I thought it was amazing. >> And it makes sense actually, 'cause Mongo is so easy to use. We were talking about 10gen. >> We knew you when, I feel that's our like, we - >> Yeah, but it's true. And so, Mongo was just really easy to use. And people are like, ah, it doesn't scale. It's like, turns out it actually does scale. >> Lena: Turns out, it scales pretty well. >> Well Lena, without question, this is my favorite conversation of the show so far. >> Thank you. >> Thank you so much for joining us. >> Thank you very much for having me. >> Dave: Great to see you. >> It's always a pleasure. >> Dave: Thanks Lena. >> Thank you. >> And thank you all, tuning in live, for tolerating wherever we take these conversations. >> Dave: Whatever that was. >> I bet you weren't ready for this one, folks. We're at AWS re:Invent in Las Vegas, Nevada. With Dave Vellante, I'm Savannah Peterson. You're watching theCUBE, the leader for high tech coverage.
Anurag Gupta, Shoreline io | AWS re:Invent 2022 - Global Startup Program
(gentle music) >> Now welcome back to theCUBE, everyone. I'm John Walls, and once again, we're glad to have you here for AWS re:Invent 22. Our coverage continues here on Thursday, day three, of what has been a jam-packed week of tech, and AWS, of course, has been the great host for this. It's now a pleasure to welcome in Anurag Gupta, who is the founder and CEO of Shoreline, joining us here as part of the AWS Global Showcase Startup Program, and Anurag, good to see you, sir. Thanks for joining us. >> Thank you so much. >> Tell us about Shoreline, about what you're up to. >> So we're a DevOps company. We're really focused on repairing issues. If you think about it, there are a ton of DevOps companies and we all went to the cloud in order to gain faster innovation, and by and large, check. Then all of the things involved in getting things into production, artifact generation, testing, configuration management, deployment, also by and large, automated. Now pity the poor SRE who's getting the deluge of stuff on them, every week, every two days, sometimes multiple times a day, and it's complicated, right? Kubernetes, VMs, lots of services, multiple clouds, sometimes, and you know, they need to know a little bit about everything. And you know what, there are a ton of companies that actually help you with what we call Day-2 Ops. It's just that most of them help you with observability, telling you what's gone wrong, or incident management, routing something to someone. But you know, back when I was at AWS, I never got really that excited about one more dashboard to look at or one more like better ticket routing. What used to really excite me was having some issue extinguished forever. And if you think about it, like the first five minutes of an incident are detecting and routing. The next hour, two hours, is some human being going in and fixing it, so that feels like the big opportunity to reduce, so hopefully we can talk a little bit about different ways that one can do that. >> What about Day-2 Ops? Just tell me about how you define that. >> So I basically define it as once the software goes into production, just making sure things stay up and are healthy and you're resilient and you don't get errors and all of those sorts of things, because everything breaks sooner or later, you know, to a greater or lesser degree. >> Especially that SRE you're talking about, right? >> Yeah. >> So let's go back to that scenario. Yeah, you pity the poor soul because they do have to be a little expert in everything. >> Exactly. >> And that's really challenging and we all know that, that's really hard. So how do you go about trying to lighten that burden, then? >> So when you look at the numbers, about somewhere between 40% to even 95% of the alarms that fire, the alerts that fire, are false positives, and that's crazy. Why is someone waking up just to deal with that? >> It's a lot of wasted time, isn't it? >> A lot of wasted time. And you know, you're also training someone into what I call ClickOps, just to go in and click the button and resolve it, and you don't actually know if it was the false positive or it's the rare real positive, and so that's a challenge, right? And so the first thing to do is to figure out where the false positives are. Like, let's say Datadog tells you that CPU is high and it alarms. Is that a good thing or a bad thing? It's hard for them to tell, right? But you have to then introspect it into something precise like, oh, CPU is high, but response times are standard and the request rate is high.
Okay, that's a good thing. I'm going to ignore this. Or CPU is high, but it kind of resolves itself, so I'm going to not wake anybody up. Or CPU is high and oh, it's the darn JVM starting to garbage collect again, so let me go and take a heap dump and give that to my dev team and then bounce the JVM, and you know, without waking anybody up. Or CPU is high, I have no idea what's going on. Now it's time to wake somebody up. You know, what you want to use humans for is the ability to think about novel stuff, not to do repetitive stuff, so that's the first step. The second step is, about 40% of what remains is repetitive and straightforward. So like a disk is full, I'd better clean up the garbage on the disk or maybe grow the disk. People shouldn't wake up just to grow a disk. And so for that, what you want to do is just have those sorts of things get automated away. One of the nice things about Shoreline is, is that we take the experience in what we build for one company, and if they're willing, provide it to everybody else. Our belief is, a central tenet is, if someone somewhere fixes something, everyone everywhere should gain the benefit, because we all sit on the same three clouds, we all sit on the same set of database infrastructure, et cetera. We should all get the same benefits. Why do we have to scar our own backs rather than benefiting from somebody else's scar tissue? So that's the second thing. The third thing is, okay, let's say it's not straightforward, not something I've seen before, then in that case, what often happens is on average like eight people get involved. You know, it initially goes to L1 support or L1 ops and, but they don't necessarily know because, as you say, the environment's complex. And so, you know, they go into Slack and they say, "@here, can somebody help me with this?" And those things take a much longer time, so wouldn't it be better if your best SRE is able to say, "Hey, check these 20 things and then run these actions." We could convert that into like a Jupyter Notebook where you could say, the incident got fired, I pre-populated all the diagnostics, and then I tell people very precisely, "If you see this, run this, et cetera." Like a wiki, but actually something you could run right in this product. And then, you know, last piece of the puzzle, the smaller piece, is sometimes new things happen and when something new happens, what you want is sort of the central tech of Shoreline, which is parallel distributed, real-time debugging. And so the ability to, you know, execute a command across your fleet rather than individual boxes, so that you can say something like, "I'm hearing that my credit card app is slow. For everything tagged as being part of my credit card app that's running over 90% CPU, please run a top command." And so, you know, then you can run in the same time on one host as you can on 30,000, and that helps a lot. So that's the core of what we do. People use us for all sorts of things, also preventative maintenance, you know, just the proactive regular things. You know, like your car, you do an oil change, well, you know, you need to rotate your certs, certificates. You need to make sure that, you know, there isn't drift in your configurations, there isn't drift in your software. There's also security elements to it, right? You want to make sure that you aren't getting weird inbound/outbound traffic across ports you don't expect to be open.
You don't want to have these processes running, you know, maybe something's bad. And so that's all the kind of weird anomaly detection that's easy to do if you run things in a distributed parallel way across everything. That's super hard to do if you have to go and Whac-A-Mole across one box after the next. >> Well, which leads to a question just in terms of setting priorities then, which is what you're talking about, helping companies establish priorities, this hierarchy of level one warning, level two, level three, level four. Sounds like that should be a basic, right? But you're saying that's not, that's not really happening in the enterprise. >> Well, you know, I would say that if you hadn't automated deployments, you should do that first. If you haven't automated your testing pipeline, shame on you, you should do that like a year ago. But now it's time to help people in production, because you've done that other work and people are suffering. You know, the crazy thing about the cloud is, is that companies spend about three times more on the human beings to operate their cloud infrastructure as on the cloud infrastructure itself. I've yet to hear anybody say that their cloud bill is too low, you know, so, you know, there's a clear savings also available. And you know, back when I was at AWS, obviously I had to keep the lights on too, but you know, I had to do that, but it's kind of a tax on my engineers and I'd really prefer to spend the head count on innovation, on doing things that delight my customers. You never delight your customers by keeping the lights on, you just avoid irritating them by turning 'em off, right? >> So why are companies so fixated on spending so much time on manually repairing things and not looking for these kinds of much more elegant, cost-efficient, time-saving solutions, and so on and so forth? >> Yeah, I think there just hasn't been very much in this space as yet because it's a hard, hard problem to solve. You know, automation's a little bit scary and that's the reality of it, and the way you make it less scary is by proving it out, by doing the simple things first, like reducing the alert fatigue, you know, that's easy. You know, providing notebooks to people so that they can click things and do things in a straightforward way. That's pretty easy. The full automation, that's kind of the North Star, that's what we aspire to do. But you know, people get there over time and one of our customers had 700 instances of this particular incident solved for them last week. You imagine how many human beings would've been doing it otherwise, you know? >> Right. >> That's just one thing, you know? >> How many did it take to build a pyramid? How many decades did that take, right? You had an announcement this week. I don't think we've talked about that. >> No, yeah, so we just announced Incident Insights, which is a free product that lets people plug into initially PagerDuty and pretty soon Opsgenie, ServiceNow, et cetera. And what you can do is you give us a read-only API key and we will suck your PagerDuty data out. We apply some lightweight ML, unsupervised learning, and in a couple of minutes, we categorize all of your incidents so that you can understand which are the ones that happen most often and are getting resolved really quickly. That's ClickOps, right? Those alarms shouldn't fire. Which are the ones that involve a lot of people? Those are good candidates to build a notebook. Which are the ones that happen again and again and again?
Those are good candidates for automation. And so, I think one of the challenges people have is, is that they don't actually know what their teams are doing, and so this is intended to provide them that visibility. One of our very first customers was doing the beta test for us on it. He used to tell us he had about 100 tickets, incidents, a week. You know, he brought this tool in and he had 2,100 last week, and it was all, you know, like these false alarms, so while he's giving us- >> That was eye opening for him to see that, sure. >> And while he's, you know, looking at it, you know, he's just like filing Jiras to say, "Oh, change this threshold, cancel this alarm forever." You know, all of that kind of stuff. Before you get to do the fancy work, you got to clean your room before you get to do anything else, right? >> Right, right, dinner before dessert, basically. >> There you go. >> Hey, thanks for the insights on this and again the name of the new product, by the way, is... >> Incident Insights. >> Incident Insights. >> Totally free. >> Free. >> Yeah, it takes a couple of minutes to set up. Go to the website, Shoreline.io/insight and you can be up and running in a couple of minutes. >> Outstanding, again, the company is Shoreline. This is Anurag Gupta, and thank you for being with us. We appreciate it. >> Appreciate it, thank you. >> Glad to have you here on theCUBE. Back with more from AWS re:Invent 22. You're watching theCUBE, the leader in high-tech coverage. (gentle music)
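A rough sketch of the alert triage Anurag walks through above, from noisy "CPU is high" alarms down to the rare page-worthy incident. The metric fields and repair helpers are hypothetical stand-ins for illustration, not Shoreline's actual API or Op syntax.

```python
# A minimal sketch of the "CPU is high" triage described above: most alarms
# are noise, some have a known automatable repair, and only the rest should
# page a human. The metric fields and repair helpers are hypothetical
# stand-ins, not Shoreline's actual API or Op syntax.

from dataclasses import dataclass


@dataclass
class HostMetrics:
    cpu_pct: float            # current CPU utilization
    p95_latency_ms: float     # response time
    request_rate_rps: float   # request rate
    jvm_gc_active: bool       # is the JVM busy garbage collecting?


LATENCY_SLO_MS = 250  # assumed service-level threshold


def take_heap_dump(host: str) -> None:
    """Hypothetical: capture evidence for the dev team."""


def restart_jvm(host: str) -> None:
    """Hypothetical: bounce the JVM, a known repetitive repair."""


def triage_high_cpu(host: str, m: HostMetrics, baseline_rps: float) -> str:
    """Return a disposition for a 'CPU is high' alert on one host."""
    if m.cpu_pct < 90:
        return "ignore: alarm already resolved itself"
    if m.p95_latency_ms <= LATENCY_SLO_MS and m.request_rate_rps > baseline_rps:
        return "ignore: healthy host simply doing more work"
    if m.jvm_gc_active:
        take_heap_dump(host)
        restart_jvm(host)
        return "auto-repaired: JVM garbage-collection churn, nobody woken up"
    return "page on-call: novel condition, human judgment needed"
```

Only the last branch wakes a human, which is the point Anurag makes about reserving people for novel problems rather than repetitive repairs.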
Scott Castle, Sisense | AWS re:Invent 2022
>> Good morning fellow nerds and welcome back to AWS re:Invent. We are live from the show floor here in Las Vegas, Nevada. My name is Savannah Peterson, joined with my fabulous co-host John Furrier. Day two keynotes are rolling. >> Yeah. What do you think of this? This is the day where everything comes, so the cork gets popped off the bottle, all the announcements start flowing out tomorrow. You hear machine learning from Swami, a lot more in depth around AI probably. And then developers with Werner Vogels, the CTO who wrote the seminal paper in the early two thousands around web services. So again, just another great year of next level cloud. Big discussion of data in the keynote; the bulk of the time was talking about data and business intelligence, making business transformation easier. Is that what people want? They want the easy button and we're gonna talk a lot about that in this segment. I'm really looking forward to this interview. >> Easy button. We all want the >> Easy, we want the easy button. >> I love that you brought up champagne. It really feels like a champagne moment for the AWS community as a whole. Being here on the floor feels a bit like the before times. I don't want to jinx it. Our next guest, Scott Castle, from Sisense. Thank you so much for joining us. How are you feeling? How's the show for you going so far? Oh, >> This is exciting. It's really great to see the changes that are coming in AWS. It's great to see the, the excitement and the activity around how we can do so much more with data, with compute, with visualization, with reporting. It's fun. >> It is very fun. I just got a note. I think you have the coolest last name of anyone we've had on the show so far, Castle. Oh, thank you. I'm here for it. I'm sure no one's ever said that before. So just in case our audience isn't familiar, tell us about >> So, Sisense is an embedded analytics platform. We take the queries and the analysis that you can power off of Aurora and Redshift and everything else and bring it to the end user in the applications they already know how to use. So it's all about embedding insights into tools. >> Embedded has been a, a real theme. Nobody wants to, it's I, I keep using the analogy of multiple tabs. Nobody wants to have to leave where they are. They want it all to come in there. Yep. Now this space is older than I think everyone at this table; BI's been around since 1958. Yep. How do you see Sisense playing a role in the evolution there? We're in a different generation of analytics. >> Yeah, I mean, BI started, as you said, in '58 with Peter Luhn's paper that he wrote for IBM. It kind of became popular in the late eighties and early nineties. And that was Gen 1 BI, that was Cognos and Business Objects and Lotus 1-2-3, think like green-and-black-screen days. And the way things worked back then is if you ran a business and you wanted to get insights about that business, you went to IT with a big check in your hand and said, Hey, can I have a report? And they'd come back and here's a report. And it wasn't quite right. You'd go back and cycle, cycle, cycle and eventually you'd get something. And it wasn't great. It wasn't all that accurate, but it's what we had. And then that whole thing changed in about 2004 when self-service BI became a thing. And the whole idea was instead of going to IT with a big check in your hand, how about you make your own charts? >> And that was totally transformative. Everybody started doing this and it was great.
And it was all built on semantic modeling and having very fast databases and data warehouses. Here's the problem: the tools to get to those insights needed to serve both business users like you and me and also power users who could do a lot more complex analysis and transformation. And as the tools got more complicated, the barrier to entry for everyday users got higher and higher and higher, to the point where now you look, look at Gartner and Forrester and IDC this year. They're all reporting the same statistic. Between 10 and 20% of knowledge workers have learned business intelligence and everybody else is just waiting in line for a data analyst or a BI analyst to get a report for them. And that's why the focus on embedded is suddenly showing up so strong, because little startups have been putting analytics into their products. People are seeing, oh my, this doesn't have to be hard. It can be easy, it can be intuitive, it can be native. Well why don't I have that for my whole business? So suddenly there's a lot of focus on how do we embed analytics seamlessly? How do we embed the investments people make in machine learning in data science? How do we bring those back to the users who can actually operationalize that? Yeah. And that's what Sisense does. Yeah. >> Yeah. It's interesting. Savannah, you know, data processing used to be what the IT department used to be called back in the day, data processing. Now data processing is what everyone wants to do. There's a ton of data we got, we saw the keynote this morning from Adam Selipsky. There was almost a standing ovation, big applause, for his announcement around ML-powered forecasting with QuickSight Q. My point is people want automation. They want to have this embedded semantic layer in where they are, not having all the process of ETL or all the muck that goes on with aligning the data. All this like a lot of stuff that goes on. How do you make it easier? >> Well, to be honest, I, I would argue that they don't want that. I think they, they think they want that, cuz that feels easier. But what users actually want is they want the insight, right? When they are about to make a decision. If you have an ML-powered forecast, and Sisense has had that built in for years, now you have an ML-powered forecast. You don't need it two weeks before or a week after in a report somewhere. You need it when you're about to decide, do I hire more salespeople or do I put a hundred grand into a marketing program? It's putting that insight at the point of decision that's important. And you don't wanna be waiting to dig through a lot of infrastructure to find it. You just want it when you need it. What's >> The alternative from a time standpoint? So real time insight, which is what you're saying. Yep. What's the alternative? If they don't have that, what's >> The alternative? Is what we are currently seeing in the market. You hire a bunch of BI analysts and data analysts to do the work for you and you hire enough that your business users can ask questions and get answers in a timely fashion. And by the way, if you're paying attention, there's not enough data analysts in the whole world to do that. Good luck. I am >> Time to get it. I really empathize with when I, I used to work for a 3D printing startup and I can, I have just, I mean, I would call it PTSD flashbacks of standing behind our BI guy with my list of queries and things that I wanted to learn more about our e-commerce platform in our, in our marketplace and community.
And it would take weeks, and I mean, this was only in 2012. We're not talking 1958 here. Well, a decade in startup years is a hundred years in the rest of the world. But I think it's really interesting. So talk to us a little bit about infused and composable analytics, and how does this relate to embedded? >>Sure. So embedded analytics for a long time was: I want to take a dashboard I built in a BI environment, and I wanna lift it and shift it into some other application so it's close to the user. And that is the right direction to go. But going back to that statistic about how, hey, 10 to 20% of users know how to do something with that dashboard: well, how do you reach the rest of the users? When you think about breaking that up and making it more personalized, instead of getting a dashboard embedded in a tool, you get individual insights, you get data visualizations, you get controls. Maybe it's not even actually a visualization at all. Maybe it's just a query result that influences the ordering of a list. So if you're a CSM, you have a list of accounts in your book of business, and you wanna rank those by who's the most likely to churn. >>Yeah. >>You get that. How do you get that most-likely-to-churn score? You get it from your BI system. But then the question is, how do I insert that back into the application that CSM is using? So that's what we talk about when we talk about infusion. And Sisense started using the infusion term about two years ago, and now it's being used everywhere. We see it in marketing from Qlik and Tableau, and Looker just recently did a whole launch on infusion. The idea is you break this up into very small, digestible pieces, you put those pieces into user experiences where they're relevant and when you need them. And to do that, you need a set of APIs and SDKs to program it, but you also need a lot of very solid building blocks so that you're not building this from scratch; you're assembling it from big pieces. >>And so what we do at Sisense is, we've got machine learning built in, we have NLQ built in, we have a whole bunch of AI-powered features, including a knowledge graph that helps users find what else they need to know. And we provide those to our customers as building blocks so that they can put those into their own products, make them look and feel native, and get that experience. In fact, one of the things that was most interesting these last couple of quarters is that we built a technology demo. We integrated Sisense with Office 365, with Google Apps for Business, with Slack and MS Teams. We literally just threw an NLQ box into Excel, and now users can go in and say, hey, which of my salespeople in the northwest region are on track to meet their quota? And they just get the table back in Excel. They can build charts of it in PowerPoint. And then when they go to do their QBR next week, or the week after that, they just hit refresh to get live data. It makes it so much more digestible. And that's the whole point of infusion. It's bigger than the iframe-based embedding or the JavaScript embedding we used to talk about four or five years >>Ago. APIs are very key. You brought that up. That's gonna be more of the integration piece. How do embeddable and composable work as more people start getting on board? It's kind of like a flywheel. How do you guys see that progression? Cause everyone's copying you.
We see that, but this means it's a standard. People want this. Yeah. What's next? What's that next flywheel benefit that you guys are coming out with? >>Composability, fundamentally. If you read the Gartner analysis, when they talk about composable, they're talking about building pre-built analytics pieces in different business units, for different purposes, and being able to plug those together. Think of containers and services that can talk to each other, and a composition platform that can pull it all into a presentation layer. Well, the presentation layer is where I focus. And so for us, composable means I'm gonna have formulas and queries and widgets and charts and everything else that my end users are gonna want, and, almost Minority Report style if I'm not dating myself with that, I can put this card here, I can put that chart here, I can set these filters here, and I get my own personalized view. But it's based on all the investments my organization's made in data and governance and quality, so that all that infrastructure is supporting me without me worrying much about it. >>Well, that's productivity on the user side. Talk about the software angle, the development. Is it low code, no code? Is there coding involved? APIs are certainly the connective tissue. What's the impact to the developer? >>Oh. So if you were working on a traditional legacy BI platform, it's virtually impossible, because this is an architectural thing that you have to be able to do. Every single tool that can make a chart has an API to embed that chart somewhere. But that's not the point. You need the lifecycle automation to create models, to modify models, to create new dashboards and charts and queries on the fly, and to be able to manage the whole lifecycle of that. So that in your composable application, when you say, well, I want a chart and I want it to go here, and I want it to do this, and I want it to be filtered this way, you can interact with the underlying platform. And most importantly, when you want to use big pieces, like, hey, I wanna forecast revenue for the next six months, you don't want it popping down into Python and writing that yourself. >>You wanna be able to say, okay, here's my forecasting algorithm, here are the inputs, here are the dimensions, and then go and just put it somewhere for me. And so that's what you get with Sisense. And there aren't any other analytics platforms that were built to do that. We were built that way because of our architecture; we're an API-first product. But more importantly, most of the legacy BI tools are legacy. They're coming from that desktop, single-user, self-service BI environment, and embedding is a small use case for them. And so composable is kind of out of reach without a complete rebuild. Right? But with Sisense, because our bread and butter has always been embedding, it's all architected to be API first. It's integrated for software developers with Git, but it also has all those low-code and no-code capabilities for business users to do the Minority Report style thing and assemble endless components into a workable digital workspace application. >>Talk about the strategy with AWS. You're here in the ecosystem, you're leading product, and they have a strategy. We know their strategy, they have some stuff, but then the ecosystem goes faster and ends up making a better product in most of the cases.
If you compare, I know they'll take me to school on that, but that's pretty much what we report on. Mongo's doing a great job; they have databases. So you kind of see this balance. How are you guys playing in the ecosystem? What's the feedback? What's it like? What's going on? >>AWS is actually really our best partner. And the reason why is because AWS has been clear for many, many years: they build componentry, they build services, they build infrastructure, they build Redshift, they build all these different things, but they need vendors to pull it all together into something usable. And fundamentally, that's what Sisense does. I mean, we didn't invent SQL, right, or the other underlying analytics technologies, but we're taking the bricks out of the briefcase and assembling them into something that users can actually deploy for their use cases. And so for us, AWS is perfect, because they focus on the hard bits, the underlying technologies; we assemble those and make them usable for customers, and we get the distribution. And of course AWS loves that, cause it drives more compute and it drives more consumption. >>How much do they pay you to say that in the keynote? (laughs) >>That was a wonderful pitch. >>Absolutely. We always say, hey, they've got a lot of great goodness in the cloud, but they're not always the best at the solutions they're trying to bring out, and you guys are making these solutions for customers. That resonates with what they've got at Amazon. >>For example, last year we did a technology demo with Comprehend, where we put Comprehend inside of a semantic model, and we would compile it and then send it back to Redshift. And it takes Comprehend, which is a very cool service, but you kind of gotta be a coder to use it. >>I've been hearing a lot of hype about the semantic layer. What is going on with that? >>The semantic layer is what connects the actual data, the tables in your database, with how they're connected and what they mean, so that a user like you or me who's saying, I want a bar chart with revenue over time, can just work with revenue and time. And the semantic layer translates between what we said and what the database knows about. >>So it speaks English and then converts it to data language. >>Exactly right. >>Yeah. It's facilitating the exchange of information. And I love this. I like that you actually talked about it in the beginning, the knowledge graph and helping people figure out what they might not know. I am not a BI analyst by trade, and I don't always know what's possible to know. And I think it's really great that you're doing that education piece. I'm sure, especially working with AWS companies, depending on their scale, that's gotta be a big part of it. How much does the community play a role in your product development? >>It's huge, because I'll tell you, one of the challenges in embedding is someone sees an amazing experience in Outreach or in Seismic and says, I want that, and I want it to be exactly the way my product is built, but I don't wanna learn a lot. And so what you want to do is have a community of people who have already built things who can help lead the way. And our community: we launched a new version of the Sisense community in early 2022, and we've seen 450% growth in that community. And we've gone from an average of one response, >>450%.
I just wanna put a little exclamation point on that. That's awesome. >>And we've tripled our organic activity. So now if you post in the Sisense community, it used to be you'd get one response, maybe from us, maybe from a customer. Now it's up to three, and it's continuing to trend up. >>It's amazing how much people are willing to help each other if you just get in the platform. >>Do it. It's great. >>I mean, business is so competitive. I think it's time for the Instagram challenge, the reels, on John. So we have a new thing we're gonna run by you. Okay. We just call it the bumper sticker for re:Invent, instead of calling it the Instagram reel. If we were gonna do an Instagram reel for 30 seconds, what would be your take on what's going on this year at re:Invent and what you guys are doing? What's the most important story that you would share with folks on Instagram? >>You know, what's been interesting to me is the story with Redshift composable... sorry, no, composable, Redshift Serverless. One of the things I've been seeing... >>We know you're thinking about composable a lot. Yes. It's in there, it's in your mouth. (laughs) >>So the fact that Redshift Serverless is now kind of becoming the de facto standard changes something for my customers. Cuz one of the challenges with Redshift that I've seen in production is, as people use it more, you gotta get more boxes, and you have to manage that. The fact that serverless is now available, that it's the default, means people are just seeing Redshift as a very fast, very responsive repository. And that plays right into the story I'm telling, cuz I'm telling them it's not that hard to put some analysis on top of things. So for me, maybe it's a narrow Instagram reel, but it's an important one. >>Yeah. And that makes it better for you, because you get to embed that, and you get access to better data, faster data, higher quality, relevant, updated. >>Yep. And as it goes out to that other 80% of knowledge workers, they have a consumer-grade expectation of experience. They're expecting that five-millisecond response time. They're not waiting 2, 3, 4, 5, 10 seconds. They're not trained to those older expectations. And so it matters a lot. >>Final question for you. Five years out from now, if things progress the way they're going, with more innovation around data, this front end being very usable, the semantic layer kicking in, you've got Lambda and you've got serverless kind of coming in, helping out along the way, what's the experience gonna look like for a user? What's it in your mind's eye? What does that user look like? What's their experience? >>I think it shifts almost every role in a business towards being a quantitative one. Talking about, hey, this is what I saw, this is my hypothesis, and this is what came out of it, so here's what we should do next. I'm really excited to see that sort of scientific method move into more functions in the business. Cuz for decades it's been the domain of a few people like me doing strategy, but now I'm seeing it in CSMs, in support people and sales engineers and line engineers. That's gonna be a big shift. >>Awesome. Thank you, Scott. Thank you so much. This has been a fantastic session. We wish you the best at Sisense. John, always a pleasure to share the stage with you. Thank you to everybody who's tuning in; tell us your thoughts.
We're always eager to hear what features have got you most excited. And as you know, we will be live here from Las Vegas at re:Invent, from the show floor, ten to six all week except for Friday; we'll give you Friday off. With John Furrier, my name's Savannah Peterson. We're theCUBE, the leader in high tech coverage.
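To make the semantic layer idea from the conversation above a little more concrete, here is a small, hedged sketch of the general pattern Scott describes: business terms like "revenue" and "time" mapped onto physical columns so a user never has to know the schema. This is not Sisense's implementation; every table, column, and function name below is invented for illustration.

```rust
use std::collections::HashMap;

// A toy "semantic layer": business-friendly terms on one side,
// physical columns and aggregation rules on the other.
struct SemanticLayer {
    terms: HashMap<&'static str, &'static str>, // business term -> SQL expression
    table: &'static str,
}

impl SemanticLayer {
    // Translate a "revenue over time" style request into SQL the warehouse understands.
    fn bar_chart_sql(&self, measure: &str, dimension: &str) -> Option<String> {
        let m = self.terms.get(measure)?;
        let d = self.terms.get(dimension)?;
        Some(format!(
            "SELECT {d} AS {dimension}, {m} AS {measure} FROM {} GROUP BY {d} ORDER BY {d}",
            self.table
        ))
    }
}

fn main() {
    let layer = SemanticLayer {
        terms: HashMap::from([
            ("revenue", "SUM(line_total)"),           // hypothetical physical mapping
            ("time", "DATE_TRUNC('month', sold_at)"), // hypothetical physical mapping
        ]),
        table: "sales.orders",
    };
    // The user asks for "revenue over time"; the layer does the translation.
    println!("{}", layer.bar_chart_sql("revenue", "time").unwrap());
}
```

In a real platform the mapping would also carry joins, governance rules, and security, but the translation step is the part that lets an NLQ box or a chart builder "speak English" while the database speaks SQL.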
SUMMARY :
Live from the show floor at AWS re:Invent in Las Vegas, Savannah Peterson and John Furrier talk with Scott Castle of Sisense, an embedded analytics platform that takes queries and analysis powered by Aurora, Redshift, and other sources and puts the results inside the applications end users already know. Castle traces BI from Luhn's 1958 IBM paper through the self-service era, notes that only 10 to 20 percent of knowledge workers ever learn BI tools, and argues that embedded, infused, and composable analytics, delivered through APIs, SDKs, NLQ, and a semantic layer, put insight at the point of decision. He also describes Sisense's partnership with AWS, a 450% jump in its community, why Redshift Serverless matters for his customers, and his prediction that nearly every business role will become more quantitative.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Scott | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
2012 | DATE | 0.99+ |
Peter Lu | PERSON | 0.99+ |
Friday | DATE | 0.99+ |
80% | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
30 seconds | QUANTITY | 0.99+ |
John | PERSON | 0.99+ |
450% | QUANTITY | 0.99+ |
Excel | TITLE | 0.99+ |
10 | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Office 365 | TITLE | 0.99+ |
IDC | ORGANIZATION | 0.99+ |
1958 | DATE | 0.99+ |
PowerPoint | TITLE | 0.99+ |
20% | QUANTITY | 0.99+ |
Forester | ORGANIZATION | 0.99+ |
Python | TITLE | 0.99+ |
Verner Vos | PERSON | 0.99+ |
early 2022 | DATE | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
10 seconds | QUANTITY | 0.99+ |
five ms | QUANTITY | 0.99+ |
Las Vegas, Nevada | LOCATION | 0.99+ |
this year | DATE | 0.99+ |
first product | QUANTITY | 0.99+ |
aws | ORGANIZATION | 0.98+ |
one response | QUANTITY | 0.98+ |
late eighties | DATE | 0.98+ |
Five years | QUANTITY | 0.98+ |
2 | QUANTITY | 0.98+ |
tomorrow | DATE | 0.98+ |
Savannah | PERSON | 0.98+ |
Scott Castle | PERSON | 0.98+ |
one | QUANTITY | 0.98+ |
Sisense | PERSON | 0.97+ |
5 | QUANTITY | 0.97+ |
English | OTHER | 0.96+ |
Click and Tableau | ORGANIZATION | 0.96+ |
Andy Sense | PERSON | 0.96+ |
Looker | ORGANIZATION | 0.96+ |
two weeks | DATE | 0.96+ |
next week | DATE | 0.96+ |
early nineties | DATE | 0.95+ |
ORGANIZATION | 0.95+ | |
serverless | TITLE | 0.94+ |
AWS Reinvent | ORGANIZATION | 0.94+ |
Mongo | ORGANIZATION | 0.93+ |
single | QUANTITY | 0.93+ |
Aurora | TITLE | 0.92+ |
Lotus 1 23 | TITLE | 0.92+ |
One | QUANTITY | 0.92+ |
JavaScript | TITLE | 0.92+ |
SES | ORGANIZATION | 0.92+ |
next six months | DATE | 0.91+ |
MS | ORGANIZATION | 0.91+ |
five years | QUANTITY | 0.89+ |
six | QUANTITY | 0.89+ |
a week | DATE | 0.89+ |
Soy Sense | TITLE | 0.89+ |
hundred grand | QUANTITY | 0.88+ |
Redshift | TITLE | 0.88+ |
Adam Lesky | PERSON | 0.88+ |
Day two keynotes | QUANTITY | 0.87+ |
floor 10 | QUANTITY | 0.86+ |
two thousands | QUANTITY | 0.85+ |
Redshift Serverless | TITLE | 0.85+ |
both business | QUANTITY | 0.84+ |
3 | QUANTITY | 0.84+ |
Eleanor Dorfman, Retool | AWS re:Invent 2022
(gentle music) >> Good morning from Las Vegas. It's theCUBE, live at AWS re:Invent 2022 with tens of thousands of people. Today really kicks off the event, with a big keynote that I think is probably just wrapping up. Lisa Martin here with Dave Vellante. Dave, this is going to be an action-packed week on theCUBE, no doubt. We talk with so many different companies. Every company's a software company these days, but we're also seeing a lot of companies leaning on software that can help them operate more efficiently in the background. >> Yeah, well, some things haven't changed at re:Invent. A lot of people here, you know, back to 2019 highs, and I think we exceeded those. Two-hour keynotes: Peter DeSantis last night talking about new Graviton instances, and then Adam Selipsky doing the typical two-hour keynote. But what was different: he was a lot more poetic than we used to hear from Andy Jassy, right? He was talking about the universe as an analogy for data. >> I loved that. >> Talked about ocean exploration for the security piece, and then exploring into the Antarctic for, you know, better chips. So yeah, I think he did a good job there. A lot of people might not love it, but I thought it was very well done. >> I thought so too. We're kicking off a great day of live content for you all day today. We've got Eleanor Dorfman joining us, the sales leader at Retool. Eleanor, welcome to theCUBE. It's great to have you. >> Thank you so much for having me. >> So let's talk a little bit about Retool. I was looking on your LinkedIn page; I love the tagline, build custom internal tools, fast. >> Eleanor: Yep. >> Talk to us a little bit about the company. You recently raised a series C2. Give us the backstory. >> Yeah, so the company was founded in 2017 by two co-founders who are best friends from college. They actually set out to build a FinTech company, a payments company. And as they were building that, they needed to build a ton of custom operations software that goes with it. If you're going to be managing people's money, you need to be able to do refunds, you need to be able to look up accounts, you need to be able to detect fraud, you need to do know-your-customer operations. And as they were building the sort of operations software that supports the business, they realized that there were patterns to all of it, and that the same components were used again and again. And they had the insight that that was actually probably a better direction to go in than recreating Venmo, which was, I think, the original idea. And that actually, this is a problem every company has, because every company needs operations engineering and operations software to run its business. And so they pivoted and started building Retool, which is a platform for building custom operations software, or internal tools. >> Dave: Good pivot. >> In hindsight, and actually probably in the moment as well, it was a good pivot. >> But you know, when you talk about some of those things, refunds, fraud, you know, KYC, you think of operations software, you think of it as just internal, but all those things are customer-facing. >> Eleanor: Yep. >> Right? So are we seeing sort of a new era? Is that a trend that you guys, your founders, saw: that these internal operations can be pointed at customers to support, what, better customer service, maybe even generate revenue, subscriptions? >> I think it's a direction we're actually heading now, but we're just starting to scratch the surface of that.
The focus for the last five years has very much been on this operations software, and sort of changing the economics of developing it, making it easy and fast to productize workflows that were previously being done in spreadsheets or hacky workarounds, and making it easier for companies to prioritize those so they can run their business more efficiently. >> And where are you having your customer conversations these days? Thinking of operations software in the background, but to Dave's point, it ends up being part of the customer experience. So where are you having your customer conversations? Target audience, who's that persona? >> Mainly developers. So we're working almost exclusively with developer teams who have backlogs and backlogs of internal tools requests to build. Their sales teams are building manual forecasts, support teams are in 19 different tools, their supply chain teams are using seven different spreadsheets to do demand forecasting or freight forwarding or things like that. But those have never been able to be prioritized to the top of the list, because customer-facing software, revenue-generating software, always takes prioritization. And in this economic environment, which is challenging for many companies right now, it's important to be able to do more with less and maximize productivity, especially of high-value employees like engineers and developers. >> So what would you say the biggest business outcomes are, if the developer is really the focus? Productivity is the... >> Productivity, I would say, for both. Developer productivity: being able to maximize your R and D, maximize the productivity of your engineers, and take away some of the very boring parts of the job. So I would say developer productivity, but then also the tools and the software that they're building are very powerful for end users. So I would say efficiency and productivity across your business. >> Across the business. >> I mean, historically, you know, operations is where we focused IT and code. How much of the code out there is dedicated to sort of operations versus customer-facing? >> It's actually kind of surprising. We have run a few surveys on this; we call them the state of engineering time, focusing on what developers are spending their time on. And a third of all code that is being written today is actually for this internal operations software. >> Interesting. And do you guys have news at the show? Are you announcing anything interesting? >> Yeah. So historically, and you sort of gave it away with one of your early questions, our focus has always been on this operations software: building web applications, building UIs on top of databases and APIs, doing that incredibly fast, and being able to do it all in one place and integrate with any data source that you need. We abstract away access, authentication, and deployment, and you build applications for your internal teams. But recently we've launched two new products. We're now supporting more external, customer-facing use cases, as well as automating cron jobs, ETL jobs, and alerting with the new Retool Workflows product. So we're expanding the scope of operations software from web applications to also internal operations like cron jobs and ETL jobs. >> Explain that. Explain the scourge of cron jobs to the audience. >> Yeah. So operations software: businesses run on operations software.
It's interesting, zooming out; it's actually something you said earlier as well. Every company has become a software company. So when you think about software, you tend to think about what's here, the very cool software that people are selling, and the software that you use as a consumer. But Coca-Cola, for example, has hundreds of software engineers who are building tools to make the business run: for forecasting, for demand gen, for their warehouse distribution and monitoring inventory. And there's two types of that. There's the applications that they build, and then the operations that have to run behind that. Maybe a workflow that is detecting how many bottles of Coca-Cola are in every warehouse and sending a notification to the right person when they're out or running low; the metaphor is getting stretched, but you know when you need a refill. So it takes those tasks, those jobs that run in the background, and enables you to customize them and build them very rapidly in a code-first way. >> So some of the notes that you guys provided say that there's over 500 million software apps that are going to be built in the next few years alone. That's tremendous. How much of that is operations software? >> I think at least a third of that, if not more, to the point where every company is being forced to maximize its resources today, and operational efficiency is the way to do that. And so it can become a competitive advantage when you can take the things that humans are doing in spreadsheets with 19 open tabs and automate that. That saves hours a day. That's a significant, significant driver of efficiency and productivity for a business. >> It does, and there's a direct correlation to the customer experience, the user experience. >> Almost certainly. When you think about building support tooling: I was chatting with Gogo wifi support on my flight over here, and they asked for my order number, and I sent it, and they looked up my account, and that's a custom piece of software they were using to look up the account, create a new account for me, and restore my second wifi purchase. And so when you think about it, even just as a consumer, you're interacting with this custom software all the time. And that's because that's what companies use to have a good customer experience and an efficient business. >> And what's the relationship with AWS? You guys started, I think you said, in 2017, so you obviously started in the cloud, but I'm particularly interested, from a seller perspective, in what that's like. Working with Amazon, how has that affected your business? >> Yeah, I mean, we're built on AWS, so we're customers and big fans. And obviously, from a selling perspective, we have a ton of integrations with AWS, so we're able to integrate directly into all the different AWS products that people are using for databases, for data warehouses, for deployment configurations, for monitoring, for security, for observability. We can basically fit into your existing AWS stack, so that building in Retool is just as seamless as building it on your own, just much, much faster. >> So in your world, is it more analytics, is it more transactional, sort of? Is it both? >> It's all of the above. And over Thanksgiving, I was asked a lot to explain what Retool did by people who were like, we just got our first iPhone.
And so I tried to explain with an example, because I have yet to stumble on the perfect metaphor. The example I typically use is DoorDash, which is a customer of ours, and has been for about three years. Three years ago, they had a problem: they had no way of turning off delivery in certain zip codes during storms, which, as someone who has had orders canceled during a storm, is an incredibly frustrating experience. And the way it worked is that they had operations team members manually submitting requests to engineers to say, there's a storm in this zip code, and an engineer would run a manual task. This didn't scale as DoorDash was opening in new countries all over the world with very different weather patterns. And so they were confronted with a choice. They could buy a piece of software out of the box; there is not a startup that does this yet. They could build it by hand, which would mean scoping the requirements, designing a UI, building authentication, building access controls, putting it into a sprint, assigning an engineer; this would've taken months and months, and then it would take just as long to iterate on it. Or they could use Retool. So they used Retool, they built this app, and it saved, I think they were saying, up to two years of engineering time for this one application, because of how quick it was. And since then they've built, I think, 50 or 60 more, automating away other tasks like that that were run out of spreadsheets or in Jira or in Slack notifications or an email saying, "Hey, could you please do this thing? There's a storm." And so now they use us for dozens and dozens of operations like that. >> A lot of automation, and of course a lot of customer delight on the other end of the spectrum, as you were talking about. It is frustrating when you don't get that order, but the company also needs to have the tools in place to automate, to be able to react quickly. >> Eleanor: Exactly. >> Because the consumers are, as we know, quite demanding. I wanted to ask you, I mentioned the tagline in the beginning: build custom internal tools, fast. You just gave us a great example with DoorDash, huge business outcomes they're achieving, but how fast are we talking? How fast can the average developer build these internal tools? >> Well, we've been doing a fun thing at our booth where we ask people what a problem is and build a tool for them while we're there. So for something lightweight, you can build it in 10 minutes. For something a little more complex, it can take up to a few weeks, depending on what the requirements are. But we do have people who will be on a call with us, being introduced to our software for the first time, and they'll start telling us about their problems, and in the background we'll be building it, and then at the end we're like, is this what you meant? And they're like, we'd like to add that to our cart. And obviously it's a platform, so you can't do that. But we've been able to build applications on a call, while people are telling us what they need. >> So fast is fast. >> I would say very fast, yeah. >> Now, how do you price? >> Right now we have a couple of different plans. We actually have a motion where you can sign up on our website and get started. So we have a free plan, we've got plans for startups, and then we've got plans all the way up to the enterprise. >> Right. And that's a subscription pricing kind of thing? >> Subscription model, yes.
>> So I get a subscription to the platform, and then what? Is there also a consumption component? >> Exactly. So there's a consumption component as well. There's access to the platform, and then you can build as many applications as you need, or build as many workflows. >> When you're having customer conversations with prospects, what do you define as Retool's superpowers? You're the sales leader. What are some of those key superpowers that you think really differentiate Retool? >> I do think, well, the sales team first and foremost, but that's not a fair answer. I would say that people are a big differentiator, though. We have a lot of very talented people who have a ton of domain expertise and care a ton about customer outcomes, which I do actually think is a little more rare than it should be. But we're also one of the only products out there that's built with a developer-first mindset, a very code-first mindset, built to integrate with your software development life cycle, but also built with the security and robustness that enterprise companies require. So it's enterprise-grade software with a developer-first approach, while still having a ton of agility and nimbleness, which is what people are really craving as the earth keeps moving around them. So I would say that's something that really sets us apart from the field. >> And then talk about what developers are saying, some of the feedback, some of the responses. And maybe even, I know we're just on day one of the show, but any feedback from the booth so far? >> We've had a few people swing by our booth and show us their Retool apps, which is incredibly cool. That's my absolute favorite thing, encountering a Retool application in the wild, which happens a lot more than I would've thought, which I maybe shouldn't say, but it is incredibly rewarding. But people love it. The reason I joined is I'd never heard customers talk about a product the way they talk about Retool, because Retool enables them to do things. For some folks who use it, it enables them to do something they previously couldn't do, so it gives them superpowers in their job and lets them triple their impact. And for others, it just makes things so fast. And it's a very delightful experience; it's very much built by developers, for developers, with a developer-first mindset. So I think it's quite fun to build in Retool. Even I can build in Retool, though not well. And it's extremely impactful, and people are able to really impact their business and delight their coworkers, which I think can be really meaningful. >> Absolutely. Delighting the coworkers directly relates to delighting the customers. >> Eleanor: Exactly. >> That customer experience and employee experience, they're like this. >> Eleanor: Exactly. >> They go hand in hand, and the employee experience has to be outstanding to be able to delight those customers, to reduce churn, to increase revenue... >> Eleanor: Exactly. >> And for brand reputation. >> And I think there is something, as someone who is customer-facing: when the coworkers and developers I work with build tools that enable me to do my job better and feel better about my own performance and my ability to impact the customer experience, it's just this incredibly virtuous cycle. >> So Retool.com is where folks can go to learn more and also try that subscription that you said was free for up to five users. >> Yes, exactly. >> All right.
I guess my last question, well, a couple of questions for you. What are some of the things that excited you that you heard from Adam Selipsky this morning? Anything from the keynote that stood out? >> Dave: Did you listen to the keynote? >> I did not. I had customer calls this morning. >> Okay, so they're bringing... >> East coast time, east coast time. >> One of the things that will excite you, I think, is they're making it easier to connect their databases. >> Eleanor: That would be very exciting. >> Aurora and Redshift, right? Okay. And they're making it easier to share data. I dunno if it goes across regions, but they're doing better integration. >> Amazing. >> Right? And you guys are integrating with those tools, right? Those data platforms. So that to me was a big thing for you guys. >> It is, and another big thing Retool does is let you build a UI layer for your application on top of every single data source. And it's funny, you hear people talk about the 360-degree view of the customer so much. This is another way to get there; it's not our primary value proposition, but it is certainly another way to get there. If you have data from their support-desk tickets in Redshift, you have data from Stripe for their payments, you have data from Twilio for their text messages, you have data from DataDog, your observability, where you can notice analytics issues, you can actually just use Retool to build an app that sits on top of all of that, so that you can give your support team, your sales team, your account management team, and your customer service team all of the data they need on their customers. And then you can build workflows so that you can do automated customer engagement reports. I get a Slack every week that shows what our top customers are doing with the product, and that's built using all of our automation software as well. >> The integration is so important, as you just articulated, because, you know, we say every company's a software company these days; every company's a data company. But also the data democratization that needs to happen, so that data moves out of certain locked-in functions and lines of business can use it, to get that visibility you were just talking about, is really going to be a competitive advantage for those that survive and thrive and grow in this market. >> I think it's first visibility, but then it's action. And I think that's what Retool does very uniquely as well: it can unite the data from all the places, take it out of the black box, put it in front of the teams, and then enable them to act on it safely and securely. So not only can you see who might be fraudulent, you can flag them as fraud. Not only can you see who's actually in danger, you can click a button, send them an email, and set up a meeting. You can set up an approval workflow to bring in an exec for engagement. You can update a password for someone in the one place where you can see that they're having issues, and not have to go somewhere else to update the password. So I think that's the key: Retool can unlock the data visibility and then the action that you need to serve your customers. >> That's a great point. It's all about the actions, and the insights those actions can be based upon. Last question for you. If you had a billboard that you could put any message you want about Retool on, what would it say? What's the big aha? This is why Retool is so great.
>> I mean, I think the big thing about Retool is it's changing the economics of software development. It takes something that previously would've been below the line, that wouldn't get prioritized because it wasn't customer-facing, and makes it possible. And so, if I could be a little bit greedy and have two billboards, one would be: Retool changes the economics of software development, and one would be: build operations software at the speed of thought. >> I love that. You're granted two billboards. >> Eleanor: Thank you. >> Those are both outstanding. Eleanor, it's been such a pleasure having you on the program. Thank you for talking to us about Retool. >> Eleanor: Thank you. >> Operations software, and the massive impact that automating it can make for developers and businesses alike, all the way to the top line. We appreciate your insights. >> Thank you so much. >> For our guest and Dave Vellante, I'm Lisa Martin. You're watching theCUBE, the leader in live, emerging, and enterprise tech coverage. (gentle music)
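As a rough, hypothetical sketch of the pattern Eleanor describes, an operations task pulled out of spreadsheets and Slack messages and turned into a small internal tool plus a scheduled alert, the core logic might look something like the following. None of this is Retool's actual API; the zip codes, thresholds, and function names are invented, and a real internal tool would add a UI, authentication, and access controls on top of this logic.

```rust
use std::collections::HashSet;

// The DoorDash-style internal tool: an ops person toggles delivery for a
// zip code during a storm instead of filing a ticket for an engineer.
struct DeliveryControls {
    paused_zips: HashSet<String>,
}

impl DeliveryControls {
    fn pause(&mut self, zip: &str, reason: &str) {
        self.paused_zips.insert(zip.to_string());
        println!("Delivery paused in {zip}: {reason}");
    }
    fn resume(&mut self, zip: &str) {
        self.paused_zips.remove(zip);
        println!("Delivery resumed in {zip}");
    }
}

// The workflow/cron half: a periodic job that checks inventory and alerts
// the right person, instead of someone watching a spreadsheet.
fn check_warehouse(warehouse: &str, bottles_on_hand: u32, reorder_at: u32) {
    if bottles_on_hand < reorder_at {
        // In a real system this would post to Slack or email; here we just print.
        println!("ALERT: {warehouse} is low ({bottles_on_hand} bottles), time to refill");
    }
}

fn main() {
    let mut controls = DeliveryControls { paused_zips: HashSet::new() };
    controls.pause("80301", "winter storm");
    controls.resume("80301");

    check_warehouse("Denver DC", 120, 500);
}
```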
SUMMARY :
Live at AWS re:Invent 2022, Lisa Martin and Dave Vellante talk with Eleanor Dorfman, sales leader at Retool, a platform for building custom internal tools and operations software fast. Dorfman explains how Retool grew out of a FinTech pivot, why roughly a third of all code written today is internal operations software, and how customers like DoorDash have saved years of engineering time by replacing spreadsheets, Jira tickets, and manual engineering tasks with Retool apps. She also covers the new Retool Workflows product for cron jobs, ETL, and alerting, Retool's integrations with AWS services such as Aurora and Redshift, its subscription-plus-consumption pricing, and her billboard message: Retool changes the economics of software development.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Eleanor | PERSON | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Adam Selipsky | PERSON | 0.99+ |
2017 | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Peter DeSantis | PERSON | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Eleanor Dorfman | PERSON | 0.99+ |
dozens | QUANTITY | 0.99+ |
Coca-Cola | ORGANIZATION | 0.99+ |
two types | QUANTITY | 0.99+ |
50 | QUANTITY | 0.99+ |
19 different tools | QUANTITY | 0.99+ |
Antarctic | LOCATION | 0.99+ |
360 degree | QUANTITY | 0.99+ |
two hour | QUANTITY | 0.99+ |
10 minutes | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
Retool | TITLE | 0.99+ |
first | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Twilio | ORGANIZATION | 0.99+ |
19 open tabs | QUANTITY | 0.99+ |
DataDog | ORGANIZATION | 0.98+ |
Retool | ORGANIZATION | 0.98+ |
first time | QUANTITY | 0.98+ |
Thanksgiving | EVENT | 0.98+ |
Redshift | TITLE | 0.98+ |
two co-founders | QUANTITY | 0.98+ |
seven different spreadsheets | QUANTITY | 0.98+ |
Stripe | ORGANIZATION | 0.98+ |
Jira | TITLE | 0.98+ |
last night | DATE | 0.97+ |
ORGANIZATION | 0.97+ | |
CRON | TITLE | 0.97+ |
over 500 million software apps | QUANTITY | 0.97+ |
2019 | DATE | 0.97+ |
Doordash | ORGANIZATION | 0.97+ |
first approach | QUANTITY | 0.96+ |
this morning | DATE | 0.96+ |
one application | QUANTITY | 0.96+ |
two billboards | QUANTITY | 0.96+ |
tons of thousands of people | QUANTITY | 0.95+ |
two new products | QUANTITY | 0.95+ |
first way | QUANTITY | 0.95+ |
DoorDash | ORGANIZATION | 0.94+ |
Gogo | ORGANIZATION | 0.94+ |
Reinvent | EVENT | 0.94+ |
Slack | TITLE | 0.93+ |
one place | QUANTITY | 0.93+ |
Anais Dotis Georgiou, InfluxData | Evolving InfluxDB into the Smart Data Platform
>>Okay, we're back. I'm Dave Vellante with theCUBE, and you're watching Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData. Anais Dotis-Georgiou is here. She's a developer advocate for InfluxData, and we're gonna dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into real-time analytics. Anais, welcome to the program. Thanks for coming on. >>Hi, thank you so much. It's a pleasure to be here. >>Oh, you're very welcome. Okay, so IOx is being touted as this next-gen open source core for InfluxDB. And my understanding is that it leverages in-memory, of course, for speed. It's a columnar store, so it gives you compression efficiency, it's gonna give you faster query speeds, and it stores files in object storage, so you've got a very cost-effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high-level value points that people should understand? >>Sure, that's a great question. So some of the main requirements that IOx is trying to achieve, and some of the most impressive ones to me: the first one is that it aims to have no limits on cardinality and also allow you to write any kind of event data that you want, whether that's a tag or a field. It also wants to deliver best-in-class performance on analytics queries, in addition to our already well-served metrics queries. We also wanna have operator control over memory usage, so you should be able to define how much memory is used for buffering, caching, and query processing. Some other really important parts are the ability to do bulk data export and import, super useful, and also broader ecosystem compatibility: where possible, we aim to use and embrace emerging standards in the data analytics ecosystem and have compatibility with things like SQL, Python, and maybe even pandas in the future. >>Okay, so a lot there. Now, we talked to Brian about how you're using Rust, which is not a new programming language, and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns. You've got big guns like Amazon and Google and Microsoft throwing their collective weight behind it. Adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with Rust. But why Rust, as an alternative to, say, C++, for example? >>Sure, that's a great question. So Rust was chosen because of its exceptional performance and reliability. While Rust is syntactically similar to C++ and has similar performance, it also compiles to native code like C++. But unlike C++, it also has much better memory safety. Memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks, and Rust achieves this memory safety due to its innovative type system. Additionally, it doesn't allow for dangling pointers, and dangling pointers are the main classes of errors that lead to exploitable security vulnerabilities in languages like C++.
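To make the dangling-pointer point concrete, here is a small illustrative sketch, not code from the interview or from InfluxDB IOx, of the kind of bug Rust's compiler simply refuses to accept. The function names and values are invented for the example.

```rust
// In C or C++, returning a pointer to a local variable compiles and then
// misbehaves at runtime; Rust rejects the equivalent at compile time.

// This does NOT compile: `s` is dropped at the end of the function, so
// returning a reference to it would create a dangling pointer.
//
// fn dangling() -> &String {
//     let s = String::from("temperature=21.5");
//     &s // error[E0106]: missing lifetime specifier
// }

// The idiomatic fix is to move ownership of the value out instead.
fn owned() -> String {
    String::from("temperature=21.5")
}

fn main() {
    let reading = owned(); // the caller now owns the String; nothing dangles
    println!("{reading}");
}
```

That compile-time guarantee, rather than a garbage collector, is what lets the engine keep fine-grained control over memory while staying safe.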
So Rust helps meet that requirement of having no limits on cardinality, for example, because we're also using the Rust implementation of Apache Arrow and this control over memory. And Rust's packaging system, called crates.io, offers everything that you need out of the box to have features like async/await to fix race conditions, to protect against buffer overflows, and to ensure thread-safe async caching structures as well. So essentially it has all the fine-grained control you need to take advantage of memory and all your resources as well as possible, so that you can handle those really, really high cardinality use cases. >>Yeah, and the more I learn about the new engine and the platform, IOx, et cetera, you see things like, you know, in the old days, and even today, you do a lot of garbage collection in these systems, and there's an inverse impact relative to performance. So it looks like the community is really modernizing the platform. But I wanna talk about Apache Arrow for a moment. It's designed to address the constraints that are associated with analyzing large data sets. We know that. But please explain why: what is Arrow, and what does it bring to InfluxDB? >>Sure, yeah. So Arrow is a framework for defining in-memory columnar data, and so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. And I will, if you don't mind, take a moment to illustrate why columnar data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room and also maybe the temperature of our stove. And in our table, we have those two temperature values, as well as maybe a measurement value, a timestamp value, and maybe some other tag values that describe what room and what house, et cetera, we're getting this data from. And so you can picture this table where we have two rows with the two temperature values, for both our room and the stove. Well, usually our room temperature is regulated, so those values don't change very often. >>So when you have column-oriented storage, essentially you take each column and group it together. And if that's the case, and you're just taking temperature values from the room, and a lot of those temperature values are the same, then you might be able to imagine how equal values will then neighbor each other, and when they neighbor each other in the storage format, this provides a really perfect opportunity for cheap compression. And then this cheap compression enables high cardinality use cases. It also enables faster scan rates. So if you wanna find, say, the min and max value of the temperature in the room across a thousand different points, you only have to read those thousand points in order to answer that question, and you have them immediately available to you. But let's contrast this with a row-oriented storage solution instead, so that we can understand better the benefits of column-oriented storage. >>So if you had row-oriented storage, you'd first have to look at every field, like the temperature in the room and the temperature of the stove. You'd have to go across every tag value that maybe describes where the room is located or what model the stove is. And at every timestamp, you'd then have to pluck out that one temperature value that you want at that one timestamp, and do that for every single row.
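As a rough sketch of the columnar idea being described here, an illustration using the Arrow Rust crate rather than anything taken from InfluxDB IOx, the room-temperature readings become one contiguous, typed column that an aggregate kernel can scan directly. The column values and crate version are assumptions made for the example.

```rust
// Cargo.toml (assumed): arrow = "<some recent version>"
use arrow::array::Float64Array;
use arrow::compute;

fn main() {
    // One column of room-temperature readings; repeated values sit next to
    // each other, which is what makes compression of this column cheap.
    let room_temps = Float64Array::from(vec![21.5, 21.5, 21.5, 21.6, 21.5, 21.5]);

    // Answering "what are the min and max across these points?" only touches
    // this one column; no tags, timestamps, or other fields are scanned.
    let min = compute::min(&room_temps);
    let max = compute::max(&room_temps);

    println!("min = {:?}, max = {:?}", min, max); // min = Some(21.5), max = Some(21.6)
}
```

A row-oriented layout would interleave these values with every tag, field, and timestamp, which is exactly the extra scanning the next part of the answer describes.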
So you're scanning across a ton more data, and that's why row-oriented doesn't provide the same efficiency as columnar. And Apache Arrow is an in-memory columnar data framework. So that's where a lot of the advantages come from. >>Okay. So you've basically described, like, a traditional database, a row approach, but I've seen a lot of traditional databases say, okay, now we can handle columnar format. Versus what you're talking about, which is really kind of native to it: is the bolt-on form not as effective because it's largely a bolt-on? Can you elucidate on that front? >>Yeah, it's not as effective, because you have more expensive compression and because you can't scan across the values as quickly. And those are pretty much the main reasons why row-oriented storage isn't as efficient as column-oriented storage. >>Yeah, got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here? >>Sure. So it's an extensible query execution framework, and it uses Arrow as its in-memory format. The way that it helps InfluxDB IOx is that, okay, it's great if you can write an unlimited amount of cardinality into InfluxDB, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So DataFusion helps enable the query process and transformation of that data. It also has a pandas API, so that you can take advantage of pandas data frames as well, and all of the machine learning tools associated with pandas. >>Okay. You're also leveraging Parquet in the platform, of course. We heard a lot about Parquet in the middle of the last decade, cuz as a storage format it improved on Hadoop column stores. What are you doing with Parquet, and why is it important? >>Sure. So Parquet is the column-oriented durable file format. It's important because it enables bulk import and bulk export, and it has compatibility with Python and pandas, so it supports a broader ecosystem. Parquet files also take very little disk space, and they're faster to scan because, again, they're column-oriented. In particular, I think Parquet files are something like 16 times cheaper than CSV files, just as a point of reference. And so that's essentially a lot of the benefits of Parquet. >>Got it. Very popular. So what exactly is InfluxData focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >>Sure. So InfluxData has contributed a lot of different things to the Apache ecosystem. For example, they contributed an implementation of Apache Arrow in Go, and that will support querying with Flux. Also, there have been quite a few contributions to DataFusion, for things like memory optimization and support for additional SQL features, like support for timestamp arithmetic, support for EXISTS clauses, and support for memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think the idea here is that if you can improve these upstream projects, then the long-term strategy is that the more you contribute and build those up, the more you will perpetuate that cycle of improvement, and the more we will invest in our own project as well. So it's just that kind of symbiotic relationship and appreciation of the open source community. >>Yeah. Got it.
You got that virtuous cycle going, what people call the flywheel. Give us your last thoughts and kind of summarize, you know, what the big takeaways are from your perspective. >>So I think the big takeaway is that InfluxData is doing a lot of really exciting things with InfluxDB IOx. And if you are interested in learning more about the technologies that Influx is leveraging to produce IOx, the challenges associated with it, and all of the hard questions, and you just wanna learn more, then I would encourage you to go to the monthly tech talks and community office hours; they are on every second Wednesday of the month at 8:30 AM Pacific time. There's also a community forum and a community Slack channel. Look for the InfluxDB_IOx channel specifically to learn more about how to join those office hours and those monthly tech talks, as well as to ask any questions you have about IOx, what to expect, and what you'd like to learn more about. As a developer advocate, I wanna answer your questions. So if there's a particular technology or stack that you wanna dive deeper into, and you want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. >>Yeah, that's awesome. You guys have a really rich community; collaborate with your peers, solve problems, and you guys are super responsive, so we really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data. >>Thank you. I really appreciate it. >>All right, you're very welcome. Okay, stay right there, and in a moment I'll be back with Tim Yokum. He's the director of engineering for InfluxData, and we're gonna talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't wanna miss this.
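To ground the DataFusion and Parquet discussion above, here is a minimal, self-contained sketch of the pattern the interview describes: columnar Parquet on disk, Arrow in memory, and a query engine on top. This is not InfluxDB IOx code; the file name, table name, and column names are invented, and the crate versions are assumptions.

```rust
// Cargo.toml (assumed):
//   datafusion = "<some recent version>"
//   tokio = { version = "<some recent version>", features = ["rt-multi-thread", "macros"] }
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    let ctx = SessionContext::new();

    // Register a Parquet file of temperature readings as a queryable table.
    // Parquet is column-oriented on disk, so the scan only reads the columns
    // the query actually touches.
    ctx.register_parquet("readings", "temps.parquet", ParquetReadOptions::default())
        .await?;

    // DataFusion parses and executes the SQL, carrying intermediate results
    // as Arrow record batches in memory.
    let df = ctx
        .sql("SELECT room, MIN(temp) AS min_t, MAX(temp) AS max_t \
              FROM readings GROUP BY room")
        .await?;

    df.show().await?;
    Ok(())
}
```

The same division of labor is what the interview attributes to IOx: Parquet for durable, cheap, columnar files; Arrow for the in-memory representation; and DataFusion as the extensible SQL engine sitting on top.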
Evolving InfluxDB into the Smart Data Platform
>>This past May, theCUBE in collaboration with Influx Data shared with you the latest innovations in time series databases. We talked at length about why a purpose-built time series database, for many use cases, was a superior alternative to general purpose databases trying to do the same thing. Now, you may remember that time series data is any data that's stamped in time, and if it's stamped, it can be analyzed historically. And when we introduced the concept to the community, we talked about how in theory those time slices could be taken, you know, every hour, every minute, every second, you know, down to the millisecond, and how the world was moving toward realtime or near-realtime data analysis to support physical infrastructure like sensors and other devices and IoT equipment. And time series databases have had to evolve to efficiently support realtime data in emerging use cases in IoT and other areas. >>And to do that, new architectural innovations have to be brought to bear. As is often the case, open source software is the linchpin to those innovations. Hello and welcome to Evolving InfluxDB into the Smart Data Platform, made possible by Influx Data and produced by theCUBE. My name is Dave Valante and I'll be your host today. Now in this program we're going to dig pretty deep into what's happening with time series data generally, and specifically how InfluxDB is evolving to support new workloads and demands and data, and specifically around data analytics use cases in real time. Now, first we're gonna hear from Brian Gilmore, who is the director of IoT and emerging technologies at Influx Data. And we're gonna talk about the continued evolution of InfluxDB and the new capabilities enabled by open source generally and specific tools. And in this program you're gonna hear a lot about things like Rust, the implementation of Apache Arrow, the use of Parquet, and tooling such as DataFusion, which power a new engine for InfluxDB. >>Now, these innovations, they evolve the idea of time series analysis by dramatically increasing the granularity of time series data, by compressing the historical time slices, if you will, from, for example, minutes down to milliseconds. And at the same time, enabling real time analytics with an architecture that can process data much faster and much more efficiently. Now, after Brian, we're gonna hear from Anais Dotis-Georgiou, who is a developer advocate at Influx Data. And we're gonna get into the why of these open source capabilities and how they contribute to the evolution of the InfluxDB platform. And then we're gonna close the program with Tim Yokum, he's the director of engineering at Influx Data, and he's gonna explain how the InfluxDB community actually evolved the data engine in mid-flight and which decisions went into the innovations that are coming to the market. Thank you for being here. We hope you enjoy the program. Let's get started. Okay, we're kicking things off with Brian Gilmore. He's the director of IoT and emerging technology at Influx Data. Brian, welcome to the program. Thanks for coming on. >>Thanks Dave. Great to be here. I appreciate the time. >>Hey, explain why InfluxDB, you know, needs a new engine. Was there something wrong with the current engine? What's going on there? >>No, no, not at all. I mean, I think it's, for us, it's been about staying ahead of the market.
I think, you know, if we think about what our customers are coming to us with now, you know, related to requests like SQL query support, things like that, we have to figure out a way to execute those for them in a way that will scale long term. And then we also wanna make sure we're innovating, we're sort of staying ahead of the market as well and sort of anticipating those future needs. So, you know, this is really a transparent change for our customers. I mean, I think we'll be adding new capabilities over time that sort of leverage this new engine, but you know, initially the customers who are using us are gonna see just great improvements in performance, you know, especially those that are working at the top end of the workload scale, you know, the massive data volumes and things like that. >>Yeah, and we're gonna get into that today and the architecture and the like, but what was the catalyst for the enhancements? I mean, when and how did this all come about? >>Well, I mean, like three years ago we were primarily on premises, right? I mean, I think we had our open source, we had an enterprise product, you know, and sort of shifting that technology, especially the open source code base, to a service basis where we were hosting it through, you know, multiple cloud providers, that was a long journey I guess. You know, phase one was, you know, we wanted to host enterprise for our customers, so we sort of created a service that we just managed and ran our enterprise product for them. You know, phase two of this cloud effort was to optimize for like multi-tenant, multi-cloud, be able to host it in a truly like SaaS manner where we could use, you know, some type of customer activity or consumption as the pricing vector, you know. And that was sort of the birth of the real first InfluxDB Cloud, you know, which has been really successful. >>We've seen, I think, like 60,000 people sign up and we've got tons and tons of both enterprises as well as like new companies, developers, and of course a lot of home hobbyists and enthusiasts who are using it on a daily basis, you know, and having that sort of big pool of very diverse customers to chat with as they're using the product, as they're giving us feedback, et cetera, has, you know, pointed us in a really good direction in terms of making sure we're continuously improving that and then also making these big leaps as we're doing with this new engine. >>Right. So you've called it a transparent change for customers, so I'm presuming it's non-disruptive, but I really wanna understand how much of a pivot this is and what does it take to make that shift from, you know, time series specialist to real time analytics and being able to support both? >>Yeah, I mean, it's much more of an evolution, I think, than like a shift or a pivot. You know, time series data is always gonna be fundamental and sort of the basis of the solutions that we offer our customers, and then also the ones that they're building on the sort of raw APIs of our platform themselves. You know, the time series market is one that we've worked diligently to lead.
I mean, I think when it comes to like metrics, especially like sensor data and app and infrastructure metrics, if we're being honest though, I think our user base is well aware that the way we were architected was much more towards those sort of backwards-looking, historical-type analytics, which are key for troubleshooting and making sure you don't, you know, run into the same problem twice. But, you know, we had to ask ourselves, what can we do to better handle those queries from a performance and a, you know, a time-to-response perspective on the queries, and can we get that to the point where the result sets are coming back so quickly from the time of query that we can limit that window down to minutes and then seconds? >>And now with this new engine, we're really starting to talk about a query window that could be, like, returning results in, you know, milliseconds of time since it hit the ingest queue. And that's really getting to the point where as your data is available, you can use it and you can query it, you can visualize it, and you can do all those sort of magical things with it, you know? And I think getting all of that to a place where we're saying, like, yes to the customer on, you know, all of the real time queries, the multiple language query support, but, you know, it was hard, but we're now at a spot where we can start introducing that to, you know, a limited number of customers, strategic customers and strategic availability zones to start. But you know, everybody over time. >>So you're basically going from what happened, to, you can still do that obviously, but to what's happening now in the moment? >>Yeah, yeah. I mean if you think about time, it's always sort of past, right? I mean, like in the moment right now, whether you're talking about like a millisecond ago or a minute ago, you know, that's pretty much right now, I think for most people, especially in these use cases where you have other sort of components of latency induced by the underlying data collection, the architecture, the infrastructure, the, you know, the devices and, you know, the sort of highly distributed nature of all of this. So yeah, I mean, getting a customer or a user to be able to use the data as soon as it is available is what we're after here. >>I always thought of real time as before you lose the customer, but now in this context, maybe it's before the machine blows up. >>Yeah, I mean operational real time is different, you know, and that's one of the things that really triggered us to know that we were heading in the right direction, is just how many sort of operational customers we have. You know, everything from like aerospace and defense. We've got companies monitoring satellites, we've got tons of industrial users, users using us as a process historian on the plant floor, you know, and if we can satisfy their sort of demands for like real time historical perspective, that's awesome. I think what we're gonna do here is we're gonna start to edge into the real time that they're used to in terms of, you know, the millisecond response times that they expect of their control systems, certainly not their historians and databases. >>Is this available, these innovations, to InfluxDB Cloud customers only? Who can access this capability? >>Yeah. I mean commercially and today, yes.
You know, I think we want to emphasize that, for now, our goal is to get our latest and greatest and our best to everybody over time, of course. You know, one of the things we had to do here was double down on sort of our commitment to open source and availability. So anybody today can take a look at the libraries on our GitHub and, you know, can inspect it and even can try to, you know, implement or execute some of it themselves in their own infrastructure. You know, we're committed to bringing our sort of latest and greatest to our cloud customers first for a couple of reasons. Number one, you know, there are big workloads and they have high expectations of us. I think number two, it also gives us the opportunity to monitor a little bit more closely how it's working, how they're using it, like how the system itself is performing. >>And so just, you know, being careful, maybe a little cautious in terms of how big we go with this right away, just sort of both limits, you know, the risk of, you know, any issues that can come with new software rollouts. We haven't seen anything so far, but also it does give us the opportunity to have meaningful conversations with a small group of users who are using the products, but once we get through that and they give us two thumbs up on it, it'll be like, open the gates and let everybody in. It's gonna be an exciting time for the whole ecosystem. >>Yeah, that makes a lot of sense. And you can do some experimentation and, you know, use the cloud resources. Let's dig into some of the architectural and technical innovations that are gonna help deliver on this vision. What should we know there? >>Well, I mean, I think foundationally we built the new core on Rust. You know, this is a newer, very sort of popular systems language, you know, it's extremely efficient, but it's also built for speed and memory safety, which goes back to us being able to deliver it in a way that is, you know, something we can inspect very closely, but then also rely on the fact that it's going to behave well if it does find error conditions. I mean, we've loved working with Go, and, you know, a lot of our libraries will continue to be sort of implemented in Go, but you know, when it came to this particular new engine, you know, that power, performance and stability, Rust was critical. On top of that, like, we've also integrated Apache Arrow and Apache Parquet for persistence. I think for anybody who's really familiar with the nuts and bolts of our backend and our TSI and our time-structured merge trees, this is a big break from that, you know, Arrow on the sort of in-memory side and then Parquet on the on-disk side.
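To picture that split, Arrow as the in-memory columnar representation and Parquet as the durable on-disk format, here is a minimal Python sketch using pyarrow. This is purely illustrative; IOx itself does this with the Rust implementations of Arrow and Parquet, not with Python:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# In memory: an Arrow table, where each field lives in its own contiguous column.
table = pa.table({
    "time": pa.array([1_000, 2_000, 3_000], type=pa.int64()),
    "sensor": pa.array(["room", "room", "stove"]),
    "temperature": pa.array([21.0, 21.0, 180.0]),
})

# On disk: the same columnar data persisted as a Parquet file...
pq.write_table(table, "measurements.parquet")

# ...and read straight back into Arrow memory with no row-by-row conversion.
round_tripped = pq.read_table("measurements.parquet")
print(round_tripped.schema)
```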
Now we're moving to like a true Coer database model for this, this version of the engine, you know, and it removes a lot of overhead for us in terms of having to manage all that serialization, the deserialization, and, you know, to that again, like blurring that line between real time and historical data. It's, you know, it's, it's highly optimized for both streaming micro batch and then batches, but true streaming as well. >>Yeah. Again, I mean, it's funny you mentioned Rust. It is, it's been around for a long time, but it's popularity is, is you know, really starting to hit that steep part of the S-curve. And, and we're gonna dig into to more of that, but give us any, is there anything else that we should know about Bryan? Give us the last word? >>Well, I mean, I think first I'd like everybody sort of watching just to like take a look at what we're offering in terms of early access in beta programs. I mean, if, if, if you wanna participate or if you wanna work sort of in terms of early access with the, with the new engine, please reach out to the team. I'm sure you know, there's a lot of communications going out and you know, it'll be highly featured on our, our website, you know, but reach out to the team, believe it or not, like we have a lot more going on than just the new engine. And so there are also other programs, things we're, we're offering to customers in terms of the user interface, data collection and things like that. And, you know, if you're a customer of ours and you have a sales team, a commercial team that you work with, you can reach out to them and see what you can get access to because we can flip a lot of stuff on, especially in cloud through feature flags. >>But if there's something new that you wanna try out, we'd just love to hear from you. And then, you know, our goal would be that as we give you access to all of these new cool features that, you know, you would give us continuous feedback on these products and services, not only like what you need today, but then what you'll need tomorrow to, to sort of build the next versions of your business. Because you know, the whole database, the ecosystem as it expands out into to, you know, this vertically oriented stack of cloud services and enterprise databases and edge databases, you know, it's gonna be what we all make it together, not just, you know, those of us who were employed by Influx db. And then finally I would just say please, like watch in ICE in Tim's sessions, like these are two of our best and brightest, They're totally brilliant, completely pragmatic, and they are most of all customer obsessed, which is amazing. And there's no better takes, like honestly on the, the sort of technical details of this, then there's, especially when it comes to like the value that these investments will, will bring to our customers and our communities. So encourage you to, to, you know, pay more attention to them than you did to me, for sure. >>Brian Gilmore, great stuff. Really appreciate your time. Thank you. >>Yeah, thanks Dave. It was awesome. Look forward to it. >>Yeah, me too. Looking forward to see how the, the community actually applies these new innovations and goes, goes beyond just the historical into the real time really hot area. As Brian said in a moment, I'll be right back with Anna East dos Georgio to dig into the critical aspects of key open source components of the Influx DB engine, including Rust, Arrow, Parque, data fusion. Keep it right there. You don't wanna miss this >>Time series Data is everywhere. 
The number of sensors, systems and applications generating time series data increases every day. All these data sources producing so much data can cause analysis paralysis. InfluxDB is an entire platform designed with everything you need to quickly build applications that generate value from time series data. InfluxDB Cloud is a serverless solution, which means you don't need to buy or manage your own servers. There's no need to worry about provisioning because you only pay for what you use. InfluxDB Cloud is fully managed, so you get the newest features and enhancements as they're added to the platform's code base. It also means you can spend time building solutions and delivering value to your users instead of wasting time and effort managing something else. InfluxDB Cloud offers a range of security features to protect your data. Multiple layers of redundancy ensure you don't lose any data. Access controls ensure that only the people who should see your data can see it. >>And encryption protects your data at rest and in transit between any of our regions or cloud providers. InfluxDB uses a single API across the entire platform suite, so you can build on open source, deploy to the cloud, and then easily query data in the cloud, at the edge or on prem using the same scripts. And InfluxDB is schemaless, automatically adjusting to changes in the shape of your data without requiring changes in your application logic. InfluxDB Cloud is production ready from day one. All it needs is your data and your imagination. Get started today at influxdata.com/cloud. >>Okay, we're back. I'm Dave Valante with theCUBE and you're watching Evolving InfluxDB into the Smart Data Platform, made possible by Influx Data. Anais Dotis-Georgiou is here, she's a developer advocate for Influx Data, and we're gonna dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into real-time analytics. Anais, welcome to the program. Thanks for coming on. >>Hi, thank you so much. It's a pleasure to be here. >>Oh, you're very welcome. Okay, so IOx is being touted as this next gen open source core for InfluxDB. And my understanding is that it leverages in-memory, of course, for speed. It's a columnar store, so it gives you compression efficiency, it's gonna give you faster query speeds, you store files in object storage, so you got a very cost effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high level value points that people should understand? >>Sure, that's a great question. So some of the main requirements that IOx is trying to achieve, and some of the most impressive ones to me: the first one is that it aims to have no limits on cardinality and also allow you to write any kind of event data that you want, whether that's a tag or a field. It also wants to deliver best in class performance on analytics queries, in addition to our already well-served metrics queries. We also wanna have operator control over memory usage, so you should be able to define how much memory is used for buffering, caching and query processing. Some other really important parts are the ability to have bulk data export and import, super useful.
Also broader ecosystem compatibility: where possible we aim to use and embrace emerging standards in the data analytics ecosystem and have compatibility with things like SQL, Python, and maybe even pandas in the future. >>Okay, so a lot there. Now we talked to Brian about how you're using Rust, which is not a new programming language, and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns. You got big guns like Amazon and Google and Microsoft throwing their collective weights behind it. It's really, the adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with Rust, but why Rust as an alternative to, say, C++ for example? >>Sure, that's a great question. So Rust was chosen because of its exceptional performance and reliability. So while Rust is syntactically similar to C++ and it has similar performance, it also compiles to native code like C++. But unlike C++, it also has much better memory safety. So memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks. And Rust achieves this memory safety due to its innovative type system. Additionally, it doesn't allow for dangling pointers, and dangling pointers are the main classes of errors that lead to exploitable security vulnerabilities in languages like C++. So Rust helps meet that requirement of having no limits on cardinality, for example, because we're also using the Rust implementation of Apache Arrow and this control over memory. And also Rust's packaging system, called crates.io, offers everything that you need out of the box to have features like async and await to fix race conditions, protection against buffer overflows, and thread-safe async caching structures as well. So essentially it just has all the fine-grained control you need to take advantage of memory and all your resources as well as possible, so that you can handle those really, really high cardinality use cases. >>Yeah, and the more I learn about the new engine and the platform, IOx et cetera, you know, you see things like, you know, in the old days and even today you do a lot of garbage collection in these systems and there's an inverse, you know, impact relative to performance. So it looks like, you know, the community is modernizing the platform. But I wanna talk about Apache Arrow for a moment. It's designed to address the constraints that are associated with analyzing large data sets. We know that, but please explain why. What is Arrow and what does it bring to InfluxDB? >>Sure, yeah. So Arrow is a framework for defining in-memory columnar data. And so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. And I will, if you don't mind, take a moment to kind of illustrate why columnar data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room and also maybe the temperature of our stove. And in our table we have those two temperature values, as well as maybe a measurement value, timestamp value, maybe some other tag values that describe what room and what house, et cetera, we're getting this data from.
And so you can picture this table where we have like two rows with the two temperature values for both our room and the stove. Well, usually our room temperature is regulated, so those values don't change very often. >>So when you have column-oriented storage, essentially you take each column and group it together. And so if that's the case and you're just taking temperature values from the room and a lot of those temperature values are the same, then you might be able to imagine how equal values will then neighbor each other in the storage format, and this provides a really perfect opportunity for cheap compression. And then this cheap compression enables high cardinality use cases. It also enables faster scan rates. So if you wanna find, like, the min and max value of the temperature in the room across a thousand different points, you only have to get those thousand different points in order to answer that question, and you have those immediately available to you. But let's contrast this with a row-oriented storage solution instead, so that we can understand better the benefits of columnar-oriented storage. >>So if you had row-oriented storage, you'd first have to look at every field, like the temperature in the room and the temperature of the stove. You'd have to go across every tag value that maybe describes where the room is located or what model the stove is, and every timestamp. You'd then have to pluck out that one temperature value that you want at that one timestamp and do that for every single row. So you're scanning across a ton more data, and that's why row-oriented doesn't provide the same efficiency as columnar, and Apache Arrow is an in-memory columnar data format framework. So that's where a lot of the advantages come >>From. Okay. So you basically described like a traditional database, a row approach, but I've seen a lot of traditional databases say, okay, now we can handle columnar format, versus what you're talking about is really, you know, kind of native. Is it not as effective, is the format not as effective, because it's largely a bolt-on? Can you elucidate on that front? >>Yeah, it's not as effective because you have more expensive compression and because you can't scan across the values as quickly. And so those are pretty much the main reasons why row-oriented storage isn't as efficient as columnar-oriented storage. Yeah. >>Got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here? >>Sure. So it's an extensible query execution framework and it uses Arrow as its in-memory format. So the way that it helps InfluxDB IOx is that, okay, it's great if you can write an unlimited amount of cardinality into InfluxDB IOx, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So DataFusion helps enable the query process and transformation of that data. It also has a pandas API so that you could take advantage of pandas DataFrames as well and all of the machine learning tools associated with pandas. >>Okay. You're also leveraging Parquet in the platform, 'cause we heard a lot about Parquet in the middle of the last decade as a storage format to improve on Hadoop column stores. What are you doing with Parquet and why is it important? >>Sure. So Parquet is the column-oriented durable file format.
So it's important because it'll enable bulk import and bulk export, it has compatibility with Python and pandas, so it supports a broader ecosystem. Parquet files also take very little disk space and they're faster to scan because, again, they're column-oriented. In particular, I think Parquet files are like 16 times cheaper than CSV files, just as kind of a point of reference. And so that's essentially a lot of the benefits of Parquet. >>Got it. Very popular. So, Anais, what exactly is Influx Data focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >>Sure. So InfluxDB first has contributed a lot of different things to the Apache ecosystem. For example, they contribute an implementation of Apache Arrow in Go, and that will support querying with Flux. Also, there have been quite a few contributions to DataFusion for things like memory optimization and support for additional SQL features, like support for timestamp arithmetic, support for EXISTS clauses, and support for memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think kind of the idea here is that if you can improve these upstream projects, then the long-term strategy here is that the more you contribute and build those up, the more you will perpetuate that cycle of improvement and the more we will invest in our own project as well. So it's just that kind of symbiotic relationship and appreciation of the open source community. >>Yeah. Got it. You got that virtuous cycle going, what people call the flywheel. Give us your last thoughts and kind of summarize, you know, what the big takeaways are from your perspective. >>So I think the big takeaway is that Influx Data is doing a lot of really exciting things with InfluxDB IOx, and I really encourage, if you are interested in learning more about the technologies that Influx is leveraging to produce IOx, the challenges associated with it and all of the hard questions, and you just wanna learn more, then I would encourage you to go to the monthly tech talks and community office hours, and they are on every second Wednesday of the month at 8:30 AM Pacific time. There are also community forums and a community Slack channel. Look for the influxdb_iox channel specifically to learn more about how to join those office hours and those monthly tech talks, as well as ask any questions you have about IOx, what to expect, and what you'd like to learn more about. As a developer advocate, I wanna answer your questions. So if there's a particular technology or stack that you wanna dive deeper into and want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. >>Yeah, that's awesome. You guys have a really rich community, collaborate with your peers, solve problems, and you guys are super responsive, so really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data. >>Thank you. I really appreciate it. >>All right, you're very welcome. Okay, stay right there and in a moment I'll be back with Tim Yoakum, he's the director of engineering for Influx Data and we're gonna talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't wanna miss this.
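A rough sketch of the room/stove example above, assuming a reasonably recent pyarrow (the interview itself doesn't prescribe any client code, and IOx does this in Rust, not Python), just to contrast a row-at-a-time scan with a column scan:

```python
import pyarrow as pa
import pyarrow.compute as pc

rows = [
    {"time": 1, "sensor": "room",  "temperature": 21.0},
    {"time": 1, "sensor": "stove", "temperature": 180.0},
    {"time": 2, "sensor": "room",  "temperature": 21.0},
    {"time": 2, "sensor": "stove", "temperature": 181.5},
]

# Row-oriented: finding min/max temperature means walking every field of every row.
temps = [r["temperature"] for r in rows]
print(min(temps), max(temps))

# Column-oriented: the temperature values already sit together in one array, so an
# aggregate touches only that column, and repeated values (the regulated room
# temperature) end up next to each other, which is what makes compression cheap.
table = pa.Table.from_pylist(rows)
print(pc.min_max(table["temperature"]))
```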
>>I'm really glad that we went with InfluxDB Cloud for our hosting because it has saved us a ton of time. It's helped us move faster, it's saved us money. And also InfluxDB has good support. My name's Alex Nada. I am CTO at Noble nine. Noble Nine is a platform to measure and manage service level objectives, which is a great way of measuring the reliability of your systems. You can essentially think of an slo, the product we're providing to our customers as a bunch of time series. So we need a way to store that data and the corresponding time series that are related to those. The main reason that we settled on InfluxDB as we were shopping around is that InfluxDB has a very flexible query language and as a general purpose time series database, it basically had the set of features we were looking for. >>As our platform has grown, we found InfluxDB Cloud to be a really scalable solution. We can quickly iterate on new features and functionality because Influx Cloud is entirely managed, it probably saved us at least a full additional person on our team. We also have the option of running InfluxDB Enterprise, which gives us the ability to even host off the cloud or in a private cloud if that's preferred by a customer. Influx data has been really flexible in adapting to the hosting requirements that we have. They listened to the challenges we were facing and they helped us solve it. As we've continued to grow, I'm really happy we have influx data by our side. >>Okay, we're back with Tim Yokum, who is the director of engineering at Influx Data. Tim, welcome. Good to see you. >>Good to see you. Thanks for having me. >>You're really welcome. Listen, we've been covering open source software in the cube for more than a decade, and we've kind of watched the innovation from the big data ecosystem. The cloud has been being built out on open source, mobile, social platforms, key databases, and of course influx DB and influx data has been a big consumer and contributor of open source software. So my question to you is, where have you seen the biggest bang for the buck from open source software? >>So yeah, you know, influx really, we thrive at the intersection of commercial services and open, so open source software. So OSS keeps us on the cutting edge. We benefit from OSS in delivering our own service from our core storage engine technologies to web services temping engines. Our, our team stays lean and focused because we build on proven tools. We really build on the shoulders of giants and like you've mentioned, even better, we contribute a lot back to the projects that we use as well as our own product influx db. >>You know, but I gotta ask you, Tim, because one of the challenge that that we've seen in particular, you saw this in the heyday of Hadoop, the, the innovations come so fast and furious and as a software company you gotta place bets, you gotta, you know, commit people and sometimes those bets can be risky and not pay off well, how have you managed this challenge? >>Oh, it moves fast. Yeah, that, that's a benefit though because it, the community moves so quickly that today's hot technology can be tomorrow's dinosaur. And what we, what we tend to do is, is we fail fast and fail often. We try a lot of things. You know, you look at Kubernetes for example, that ecosystem is driven by thousands of intelligent developers, engineers, builders, they're adding value every day. So we have to really keep up with that. 
And as the stack changes, we try different technologies, we try different methods, and at the end of the day, we come up with a better platform as a result of just the constant change in the environment. It is a challenge for us, but it's something that we just do every day. >>So we have a survey partner down in New York City called Enterprise Technology Research, ETR, and they do these quarterly surveys of about 1500 CIOs, IT practitioners, and they really have a good pulse on what's happening with spending. And the data shows that containers generally, but specifically Kubernetes, is one of the areas that has kind of been off the charts and seen the most significant adoption and velocity, particularly, you know, along with cloud. But really Kubernetes is just, you know, still up and to the right consistently, even with, you know, the macro headwinds and all of the stuff that we're sick of talking about. So what are you doing with Kubernetes in the platform? >>Yeah, it's really central to our ability to run the product. When we first started out, we were just on AWS, and the way we were running was a little bit like containers junior. Now we're running Kubernetes everywhere: at AWS, Azure, Google Cloud. It allows us to have a consistent experience across three different cloud providers and we can manage that in code, so our developers can focus on delivering services, not trying to learn the intricacies of Amazon, Azure, and Google and figure out how to deliver services on those three clouds with all of their differences. >>Just to follow up on that, I presume it sounds like there's a PaaS layer there to allow you guys to have a consistent experience across clouds and out to the edge, you know, wherever. Is that correct? >>Yeah, so we've basically built more or less platform engineering. This is the new hot phrase, you know. Kubernetes has made a lot of things easy for us because we've built a platform that our developers can lean on and they only have to learn one way of deploying their application, managing their application. And so that just gets all of the underlying infrastructure out of the way and lets them focus on delivering Influx Cloud. >>Yeah, and I know I'm taking a little bit of a tangent, but is that, I'll call it a PaaS layer if I can use that term. Are there specific attributes to InfluxDB, or is it kind of just generally off-the-shelf PaaS? You know, is there any purpose-built capability there that is value add, or is it pretty much generic? >>So we really look at things through a build-versus-buy lens. Some things we want to leverage cloud provider services, for instance Postgres databases for metadata; perhaps we'll get that off of our plate, let someone else run that. We're going to deploy a platform that our engineers can deliver on, that has consistency, that is all generated from code, that we can, as an SRE group, as an ops team, manage with very few people really, and we can stamp out clusters across multiple regions in no time. >>So sometimes you build, sometimes you buy it. How do you make those decisions, and what does that mean for the platform and for customers? >>Yeah, so what we're doing is, like everybody else will do, we're looking for trade-offs that make sense. You know, we really want to protect our customers' data.
So we look for services that support our own software with the most uptime, reliability, and durability we can get. Some things are just going to be easier to have a cloud provider take care of on our behalf. We make that transparent for our own team, and of course for customers you don't even see that, but we don't want to try to reinvent the wheel. Like I had mentioned with SQL data stores for metadata, perhaps let's build on top of what these three large cloud providers have already perfected, and we can then focus on our platform engineering and we can have our developers then focus on the Influx Data software, the Influx Cloud software. >>So take it to the customer level. What does it mean for them? What's the value that they're gonna get out of all these innovations that we've been talking about today, and what can they expect in the future? >>So first of all, people who use the OSS product are really gonna be at home on our cloud platform. You can run it on your desktop machine, on a single server, what have you, but then you want to scale up. We have some 270 terabytes of data across over 4 billion series keys that people have stored. So there's a proven ability to scale. Now, in terms of the open source software and how we've developed the platform, you're getting a highly available, high cardinality time series platform. We manage it, and really, as I mentioned earlier, we can keep up with the state of the art. We keep reinventing, we keep deploying things in real time. We deploy to our platform every day, repeatedly, all the time. And it's that continuous deployment that allows us to continue testing things in flight, rolling things out that change new features, better ways of doing deployments, safer ways of doing deployments. >>All of that happens behind the scenes. And like we had mentioned earlier, Kubernetes, I mean, that allows us to get that done. We couldn't do it without having that platform as a base layer for us to then put our software on. So we iterate quickly. When you're on the Influx Cloud platform, you really are able to take advantage of new features immediately. We roll things out every day, and as those things go into production, you have the ability to use them. And so in the end we want you to focus on getting actual insights from your data instead of running infrastructure, you know, let us do that for you. So, >>And that makes sense, but are the innovations that we're talking about in the evolution of InfluxDB, do you see that as sort of a natural evolution for existing customers? I'm sure the answer is both, but is it opening up new territory for customers? Can you add some color to that? >>Yeah, it really is a little bit of both. Any engineer will say, well, it depends. So cloud native technologies are really the hot thing. IoT, industrial IoT especially, people want to just shove tons of data out there and be able to do queries immediately, and they don't wanna manage infrastructure. What we've started to see are people that use the cloud service as their data store backbone, and then they use edge computing with our OSS product to ingest data from, say, multiple production lines and downsample that data, and send the rest of that data off to Influx Cloud where the heavy processing takes place.
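As a purely hypothetical sketch of that edge pattern, downsampling high-rate readings locally and forwarding only the reduced series upstream, here is what the reduction step might look like in Python with pandas; the interview doesn't specify the tooling that actually runs at the edge:

```python
import numpy as np
import pandas as pd

# One reading per second from a production line for an hour.
raw = pd.DataFrame(
    {"vibration": np.random.normal(0.5, 0.05, size=3600)},
    index=pd.date_range("2022-01-01", periods=3600, freq="s"),
)

# Downsample to one-minute means before shipping the series to the cloud.
downsampled = raw.resample("1min").mean()
print(len(raw), "raw points ->", len(downsampled), "points sent upstream")
```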
So really, us being in all the different clouds and iterating on that and being in all sorts of different regions allows people to really get out of the business of trying to manage that big data, and have us take care of that. And of course as we change the platform, end users benefit from that immediately. >>And so obviously taking away a lot of the heavy lifting for the infrastructure. Would you say the same thing about security, especially as you go out to IoT and the edge? How should we be thinking about the value that you bring from a security perspective? >>Yeah, we take security super seriously. It's built into our DNA. We do a lot of work to ensure that our platform is secure, that the data we store is kept private. It's of course always a concern. You see in the news all the time companies being compromised, you know. That's something that you can have an entire team working on, which we do, to make sure that the data that you have, whether it's in transit, whether it's at rest, is always kept secure, is only viewable by you. You know, you look at things like software bill of materials: if you're running this yourself, you have to go vet all sorts of different pieces of software. And we do that, you know, as we use new tools. That's just part of our jobs, to make sure that the platform that we're running has fully vetted software, and with open source especially, that's a lot of work. And so it's definitely new territory. Supply chain attacks are definitely happening at a higher clip than they used to, but that is really just part of a day in the life for folks like us that are building platforms. >>Yeah, and that's key. I mean especially when you start getting into, you know, we talk about IoT and the operations technologies, the engineers running that infrastructure, you know, historically, as you know, Tim, they would air gap everything. That's how they kept it safe. But that's not feasible anymore. Everything's >>That >>Connected now, right? And so you've gotta have a partner that is, again, taking away that heavy lifting in R&D so you can focus on some of the other activities. Right. Give us the last word and the key takeaways from your perspective. >>Well, you know, from my perspective I see it as a two-lane approach with Influx, with any time series data. You know, you've got a lot of stuff that you're gonna run on-prem, what you had mentioned, air gapping. Sure, there's plenty of need for that, but at the end of the day, people that don't want to run big data centers, people that want to trust their data to a company that's got a full platform set up for them that they can build on, send that data over to the cloud, the cloud is not going away. I think a more hybrid approach is where the future lives, and that's what we're prepared for. >>Tim, really appreciate you coming to the program. Great stuff. Good to see you. >>Thanks very much. Appreciate it. >>Okay, in a moment I'll be back to wrap up today's session. You're watching theCUBE. >>Are you looking for some help getting started with InfluxDB, Telegraf or Flux? Check >>Out InfluxDB University >>Where you can find our entire catalog of free training that will help you make the most of your time series data >>Get >>Started for free at influxdbu.com. >>We'll see you in class.
>>Okay, so we heard today from three experts on time series and data how the InfluxDB platform is evolving to support new ways of analyzing large data sets very efficiently and effectively in real time. And we learned that key open source components like Apache Arrow, the Rust programming language, DataFusion and Parquet are being leveraged to support realtime data analytics at scale. We also learned about the contributions and importance of open source software and how the InfluxDB community is evolving the platform with minimal disruption to support new workloads, new use cases, and the future of realtime data analytics. Now remember, these sessions are all available on demand. You can go to thecube.net to find those. Don't forget to check out siliconangle.com for all the news related to things enterprise and emerging tech. And you should also check out influxdata.com. There you can learn about the company's products. You'll find developer resources like free courses. You can join the developer community and work with your peers to learn and solve problems. And there are plenty of other resources around use cases and customer stories on the website. This is Dave Valante. Thank you for watching Evolving InfluxDB into the Smart Data Platform, made possible by Influx Data and brought to you by theCUBE, your leader in enterprise and emerging tech coverage.
Daniel Rethmeier & Samir Kadoo | Accelerating Business Transformation
(upbeat music) >> Hi everyone. Welcome to theCUBE special presentation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We got two great guests, one videoing in from Germany, one from Maryland. We've got VMware and AWS. This is the customer successes with VMware Cloud on AWS Showcase: Accelerating Business Transformation. Here in the Showcase with Samir Kadoo, worldwide VMware strategic alliance solution architect leader with AWS. Samir, great to have you. And Daniel Rethmeier, principal architect global AWS synergy at VMware. Guys, you guys are working together, you're the key players in this relationship as it rolls out and continues to grow. So welcome to theCUBE. >> Thank you, greatly appreciate it. >> Great to have you guys both on. As you know, we've been covering this since 2016, when Pat Gelsinger, then CEO of VMware, and Andy Jassy, then CEO of AWS, did this. It kind of got people by surprise, but it really kind of cleaned out the positioning in the enterprise for the success of VM workloads in the cloud. VMware's had great success with it since, and you guys have the great partnerships. So this has been like a really strategic, successful partnership. Where are we right now? You know, years later, we got this whole inflection point coming, you're starting to see this idea of higher level services, more performance coming in at the infrastructure side, more automation, more serverless, and AI. I mean, it's just getting better and better every year in the cloud. Kind of a whole 'nother level. Where are we? Samir, let's start with you on the relationship. >> Yeah, totally. So I mean, there's several things to keep in mind, right? So in 2016, right, that's when the partnership between AWS and VMware was announced. And then less than a year later, that's when we officially launched VMware Cloud on AWS. Years later, we've been driving innovation, working with our customers, jointly engineering this between AWS and VMware. Day in, day out, as far as advancing VMware Cloud on AWS. You know, even if you look at the innovation that takes place with the solution, things have modernized, things have changed, there's been advancements. You know, whether it's security focus, whether it's platform focus, whether it's networking focus, there's been modifications along the way, even storage, right, more recently. One of the things to keep in mind is we're looking to deliver value to our customers together. These are our joint customers. So there's hundreds of VMware and AWS engineers working together on this solution. And then factor in even our sales teams, right? We have VMware and AWS sales teams interacting with each other on a constant daily basis. We're working together with our customers at the end of the day too. Then we're looking to even offer and develop jointly engineered solutions specific to VMware Cloud on AWS, and even with VMware's other platforms as well. Then the other thing comes down to where we have dedicated teams around this at both AWS and VMware. So even from solutions architects, even to our sales specialists, even to our account teams, even to specific engineering teams within the organizations, they all come together to drive this innovation forward with VMware Cloud on AWS and the jointly engineered solution partnership as well. And then I think one of the key things to keep in mind comes down to we have nearly 600 channel partners that have achieved VMware Cloud on AWS service competency.
So think about it from the standpoint, there's 300 certified or validated technology solutions that are now available to our customers. So that's even innovation right off the top as well. >> Great stuff. Daniel, I want to get to you in a second on this principal architect position you have. In your title, you're the global AWS synergy person. Synergy means bringing things together, making it work. Take us through the architecture, because we heard a lot of folks at VMware Explore this year, formerly VMworld, talking about how workloads and IT have been completely transforming into cloud and hybrid, right? This is where the action is. Where are you? Are your customers taking advantage of that new shift? You got AIOps, you got ITOps changing a lot, you got a lot more automation, edge is right around the corner. This is like a complete transformation from where we were just five years ago. What are your thoughts on the relationship? >> So at first, I would like to emphasize that our collaboration is not just that we have dedicated teams to help our customers get the most and the best benefits out of VMware Cloud on AWS, we are also enabling each other mutually. So AWS learns from us about the VMware technology, while VMware people learn about the AWS technology. We are also enabling our channel partners and we are working together on customer projects. So we have regular assemblies globally and also virtually on Slack and the usual suspect tools, working together and listening to customers. That's very important. Asking our customers where their needs are. And we are driving the solution in the direction that our customers get the best benefits out of VMware Cloud on AWS. And over time, we really have evolved the solution. As Samir mentioned, we just added additional storage solutions to VMware Cloud on AWS. We now have three different instance types that cover a broad range of workloads. So for example, we just added the i4i host, which is ideal for workloads that require a lot of CPU power, such as, you mentioned it, AI workloads. >> Yeah, so I want to get into specifically the customer journey and their transformation. You know, we've been reporting on SiliconANGLE and theCUBE in the past couple weeks in a big way that the ops teams are now the new devs, right? I mean that sounds a little bit weird, but IT operations is now part of a lot more DataOps, security, writing code, composing. You know, with open source, a lot of great things are changing. Can you share specifically what customers are looking for when you say, as you guys come in and assess their needs, what are they doing, what are some of the things that they're doing with VMware on AWS specifically that's a little bit different? Can you share some of the highlights there? >> That's a great point, because originally, VMware and AWS came from very different directions when it comes to the people and customers we speak to. So for example, AWS is very developer focused, whereas VMware has a very great footprint in the ITOps area. And usually these are very different teams, groups, different cultures, but it's getting together. However, we always try to address the customer needs, right? There are customers that want to build up a new application from scratch and build resiliency, availability, recoverability, scalability into the application. But there are still a lot of customers that say, "Well, we don't have all of the skills to redevelop everything to refactor an application to make it highly available.
So we want to have all of that as a service. Recoverability as a service, scalability as a service. We want to have this from the infrastructure." That was one of the unique selling points for VMware on-premise and now we are bringing this into the cloud. >> Samir, talk about your perspective. I want to get your thoughts, and not to take a tangent, but we had covered the AWS re:MARS, actually it was Amazon re:MARS, machine learning automation, robotics and space was really kind of the confluence of industrial IoT, software, physical. And so when you look at like the IT operations piece becoming more software, you're seeing things about automation, but the skill gap is huge. So you're seeing low code, no code, automation, you know, "Hey Alexa, deploy a Kubernetes cluster." Yeah, I mean that's coming, right? So we're seeing this kind of operating automation meets higher level services, meets workloads. Can you unpack that and share your opinion on what you see there from an Amazon perspective and how it relates to this? >> Yeah. Yeah, totally, right? And you know, look at it from the point of view where we said this is a jointly engineered solution, but it's not migrating to one option or the other option, right? It's more or less together. So even with VMware Cloud on AWS, yes it is utilizing AWS infrastructure, but your environment is connected to that AWS VPC in your AWS account. So if you want to leverage any of the native AWS services, so any of the 200 plus AWS services, you have that option to do so. So that's going to give you that power to do certain things, such as, for example, like how you mentioned with IoT, even with utilizing Alexa, or if there's any other service that you want to utilize, that's the joining point between both of the offerings right off the top. Though with digital transformation, right, you have to think about where it's not just about the technology, right? There's also where you want to drive growth in the underlying technology even in your business. Leaders are looking to reinvent their business, they're looking to take different steps as far as pursuing a new strategy, maybe it's a process, maybe it's with the people, the culture, like how you said before, where people are coming in from a different background, right? They may not be used to the cloud, they may not be used to AWS services, but now you have that capability to mesh them together. >> Okay. >> Then also- >> Oh, go ahead, finish your thought. >> No, no, no, I was going to say what it also comes down to is you need to think about the operating model too, where it is a shift, right? Especially for that vStor admin that's used to their on-premises environment. Now with VMware Cloud on AWS, you have that ability to leverage a cloud, but the investment that you made and certain things as far as automation, even with monitoring, even with logging, you still have that methodology where you can utilize that in VMware Cloud on AWS too. >> Daniel, I want to get your thoughts on this because at Explore and after the event, as we prep for CubeCon and re:Invent coming up, the big AWS show, I had a couple conversations with a lot of the VMware customers and operators, and it's like hundreds of thousands of users and millions of people talking about and peaked on VMware, interested in VMware. The common thread was one person said, "I'm trying to figure out where I'm going to put my career in the next 10 to 15 years." 
And they've been very comfortable with VMware in the past, very loyal, and they're kind of talking about, I'm going to be the next cloud, but there's no like role yet. Architects, is it solution architect, SRE? So you're starting to see the psychology of the operators who now are going to try to make these career decisions. Like what am I going to work on? And then it's kind of fuzzy, but I want to get your thoughts, how would you talk to that persona about the future of VMware on, say, cloud for instance? What should they be thinking about? What's the opportunity? And what's going to happen? >> So digital transformation definitely is a huge change for many organizations and leaders are perfectly aware of what that means. And that also means to some extent, concerns with your existing employees. Concerns about do I have to relearn everything? Do I have to acquire new skills and trainings? Is everything worthless I learned over the last 15 years of my career? And the answer is to make digital transformation a success, we need not just to talk about technology, but also about process, people, and culture. And this is where VMware really can help because if you are applying VMware Cloud on AWS to your infrastructure, to your existing on-premise infrastructure, you do not need to change many things. You can use the same tools and skills, you can manage your virtual machines as you did in your on-premise environment, you can use the same managing and monitoring tools, if you have written, and many customers did this, if you have developed hundreds of scripts that automate tasks and if you know how to troubleshoot things, then you can use all of that in VMware Cloud on AWS. And that gives not just leaders, but also the architects at customers, the operators at customers, the confidence in such a complex project. >> The consistency, very key point, gives them the confidence to go. And then now that once they're confident, they can start committing themselves to new things. Samir, you're reacting to this because on your side, you've got higher level services, you've got more performance at the hardware level. I mean, a lot improvements. So, okay, nothing's changed, I can still run my job, now I got goodness on the other side. What's the upside? What's in it for the customer there? >> Yeah, so I think what it comes down to is they've already been so used to or entrenched with that VMware admin mentality, right? But now extending that to the cloud, that's where now you have that bridge between VMware Cloud on AWS to bridge that VMware knowledge with that AWS knowledge. So I will look at it from the point of view where now one has that capability and that ability to just learn about the cloud. But if they're comfortable with certain aspects, no one's saying you have to change anything. You can still leverage that, right? But now if you want to utilize any other AWS service in conjunction with that VM that resides maybe on-premises or even in VMware Cloud on AWS, you have that option to do so. So think about it where you have that ability to be someone who's curious and wants to learn. And then if you want to expand on the skills, you certainly have that capability to do so. >> Great stuff, I love that. Now that we're peeking behind the curtain here, I'd love to have you guys explain, 'cause people want to know what's goes on behind the scenes. How does innovation get happen? How does it happen with the relationships? 
Can you take us through a day in the life of kind of what goes on to make innovation happen with the joint partnership? Do you guys just have a Zoom meeting, do you guys fly out, you write code, go do you ship things? I mean, I'm making it up, but you get the idea. How does it work? What's going on behind the scenes? >> So we hope to get more frequently together in-person, but of course we had some difficulties over the last two to three years. So we are very used to Zoom conferences and Slack meetings. You always have to have the time difference in mind if you are working globally together. But what we try, for example, we have regular assembles now also in-person, geo-based, so for AMEA, for the Americas, for APJ. And we are bringing up interesting customer situations, architectural bits and pieces together. We are discussing it always to share and to contribute to our community. >> What's interesting, you know, as events are coming back, Samir, before you weigh in this, I'll comment as theCUBE's been going back out to events, we're hearing comments like, "What pandemic? We were more productive in the pandemic." I mean, developers know how to work remotely and they've been on all the tools there, but then they get in-person, they're happy to see people, but no one's really missed the beat. I mean, it seems to be very productive, you know, workflow, not a lot of disruption. More, if anything, productivity gains. >> Agreed, right? I think one of the key things to keep in mind is even if you look at AWS's, and even Amazon's leadership principles, right? Customer obsession, that's key. VMware is carrying that forward as well. Where we are working with our customers, like how Daniel said and meant earlier, right? We might have meetings at different time zones, maybe it's in-person, maybe it's virtual, but together we're working to listen to our customers. You know, we're taking and capturing that feedback to drive innovation in VMware Cloud on AWS as well. But one of the key things to keep in mind is yes, there has been the pandemic, we might have been disconnected to a certain extent, but together through technology, we've been able to still communicate, work with our customers, even with VMware in between, with AWS and whatnot, we had that flexibility to innovate and continue that innovation. So even if you look at it from the point of view, right? VMware Cloud on AWS Outposts, that was something that customers have been asking for. We've been able to leverage the feedback and then continue to drive innovation even around VMware Cloud on AWS Outposts. So even with the on-premises environment, if you're looking to handle maybe data sovereignty or compliance needs, maybe you have low latency requirements, that's where certain advancements come into play, right? So the key thing is always to maintain that communication track. >> In our last segment we did here on this Showcase, we listed the accomplishments and they were pretty significant. I mean geo, you got the global rollouts of the relationship. It's just really been interesting and people can reference that, we won't get into it here. But I will ask you guys to comment on, as you guys continue to evolve the relationship, what's in it for the customer? What can they expect next? Because again, I think right now, we're at an inflection point more than ever. What can people expect from the relationship and what's coming up with re:Invent? Can you share a little bit of kind of what's coming down the pike? 
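Before the conversation turns to what's next, it may help to make two of the earlier points concrete: Daniel's point that existing vSphere tooling and scripts keep working unchanged against the SDDC's vCenter, and Samir's point that the same workload can reach native AWS services through the connected VPC. The sketch below is an editorial illustration under assumed conditions, not something the guests walked through: the vCenter hostname, credentials, region, and CloudWatch namespace are placeholders, and it presumes the machine running it can reach both the SDDC vCenter and AWS endpoints and has IAM credentials configured.

```python
# Illustrative sketch only: the same pyVmomi script an admin might have used
# on-prem, pointed at a VMware Cloud on AWS SDDC vCenter, then enriched with
# one native AWS service call via boto3. All identifiers are placeholders.
import ssl
import boto3
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# 1. Same tools, same skills: connect to the SDDC vCenter exactly as on-prem.
context = ssl.create_default_context()
si = SmartConnect(host="vcenter.sddc-example.vmwarevmc.com",
                  user="cloudadmin@vmc.local",
                  pwd="********",
                  sslContext=context)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
powered_on = [vm.name for vm in view.view
              if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn]
view.Destroy()
Disconnect(si)

# 2. Enrich with a native AWS service: publish the count to CloudWatch so the
#    SDDC shows up in the same dashboards as everything else in the account.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
cloudwatch.put_metric_data(
    Namespace="ExampleSDDC",
    MetricData=[{"MetricName": "PoweredOnVMs",
                 "Value": len(powered_on),
                 "Unit": "Count"}],
)
print(f"{len(powered_on)} powered-on VMs reported to CloudWatch")
```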
>> So one of the most important things we have announced this year, and we will continue to evolve in that direction, is independent scaling of storage. That absolutely was one of the most important items customers asked for over the last years. Whenever you require additional storage to host your virtual machines in VMware Cloud on AWS, you usually have to add additional nodes. Now we have three different node types with different ratios of compute, storage, and memory. But if you only require additional storage, you always have to also get additional compute and memory, and you have to pay for it. And now, with two solutions which offer choice for the customers, Amazon FSx for NetApp ONTAP and VMware Cloud Flex Storage, you have two cost effective opportunities to add storage to your virtual machines. And that offers opportunities for other instance types, maybe ones that don't have local storage. We are also very, very keen looking forward to exciting announcements at the upcoming events. >> Samir, what's your take on what's coming down on your side? >> Yeah, I think one of the key things to keep in mind is we're looking to help our customers be agile and scale with their needs, right? So with VMware Cloud on AWS, that's one of the key things that comes to mind, right? There are going to be announcements, innovations, and whatnot with upcoming events. But together, we're able to leverage that to advance VMware Cloud on AWS. To Daniel's point, storage for example, even with host offerings. And then even with decoupling storage from compute and memory, right? Now you have the flexibility where you can do all of that. So look at it from the standpoint where now, with 21 regions where we have VMware Cloud on AWS available as well, customers can utilize that as needed, when needed, right? So it comes down to, you know, transformation will be there. Yes, there are going to be cases where workloads have to be adapted, where they're utilizing certain AWS services, but you have that flexibility and option to do so. And I think with the continuing events, that's going to give us the options to even advance our own services together. >> Well, you guys are in the middle of it, you're in the trenches, you're making things happen, you've got a team of people working together. My final question is really more of a current situation, kind of future evolutionary thing that we haven't seen before. I want to get both of your reactions to it. And we've been bringing this up in the open conversations on theCUBE: in the old days, going back a generation, you had ecosystems. VMware had an ecosystem, AWS had an ecosystem. You know, we have a product, you have a product, biz dev deals happen, people sign relationships, and they do business together and they sell each other's products or do some stuff. Now it's more about architecture, 'cause we're now in a distributed large scale environment where the roles of ecosystems are intertwining and you guys are in the middle of two big ecosystems. You mentioned channel partners, you both have a lot of partners on both sides, they come together. So you have this now almost a three dimensional or multidimensional ecosystem interplay. What are your thoughts on this? Because it's about the architecture. Integration is a value, not so much innovation only. You got to do innovation, but when you do innovation, you got to integrate it, you got to connect it.
So how do you guys see this as an architectural thing, start to see more technical business deals? >> So we are removing dependencies from individual ecosystems and from individual vendors. So a customer no longer has to decide for one vendor and then it is a very expensive and high effort project to move away from that vendor, which ties customers even closer to specific vendors. We are removing these obstacles. So with VMware Cloud on AWS, moving to the cloud, firstly it's not a dead end. If you decide at one point in time because of latency requirements or maybe some compliance requirements, you need to move back into on-premise, you can do this. If you decide you want to stay with some of your services on-premise and just run a couple of dedicated services in the cloud, you can do this and you can man manage it through a single pane of glass. That's quite important. So cloud is no longer a dead end, it's no longer a binary decision, whether it's on-premise or the cloud, it is the cloud. And the second thing is you can choose the best of both worlds, right? If you are migrating virtual machines that have been running in your on-premise environment to VMware Cloud on AWS either way in a very, very fast cost effective and safe way, then you can enrich, later on enrich these virtual machines with services that are offered by AWS, more than 200 different services ranging from object-based storage, load balancing, and so on. So it's an endless, endless possibility. >> We call that super cloud in the way that we generically defining it where everyone's innovating, but yet there's some common services. But the differentiation comes from innovation where the lock in is the value, not some spec, right? Samir, this is kind of where cloud is right now. You guys are not commodity, amazon's completely differentiating, but there's some commodity things happen. You got storage, you got compute, but then you got now advances in all areas. But partners innovate with you on their terms. >> Absolutely. >> And everybody wins. >> Yeah, I 100% agree with you. I think one of the key things, you know, as Daniel mentioned before, is where it's a cross education where there might be someone who's more proficient on the cloud side with AWS, maybe more proficient with the VMware's technology. But then for partners, right? They bridge that gap as well where they come in and they might have a specific niche or expertise where their background, where they can help our customers go through that transformation. So then that comes down to, hey, maybe I don't know how to connect to the cloud, maybe I don't know what the networking constructs are, maybe I can leverage that partner. That's one aspect to go about it. Now maybe you migrated that workload to VMware Cloud on AWS. Maybe you want to leverage any of the native AWS services or even just off the top, 200 plus AWS services, right? But it comes down to that skillset, right? So again, solutions architecture at the back of the day, end of the day, what it comes down to is being able to utilize the best of both worlds. That's what we're giving our customers at the end of the day. >> I mean, I just think it's a refactoring and innovation opportunity at all levels. I think now more than ever, you can take advantage of each other's ecosystems and partners and technologies and change how things get done with keeping the consistency. I mean, Daniel, you nailed that, right? I mean you don't have to do anything. You still run it. 
Just keep working it the way you've been working it, and now do new things. This is kind of a cultural shift. >> Yeah, absolutely. And if you look, not every customer, not every organization has the resources to refactor and re-platform everything. And we give them a very simple and easy way to move workloads to the cloud. Simply run them, and at the same time they can free up resources to develop new innovations and grow their business. >> Awesome. Samir, thank you for coming on. Daniel, thank you for joining from Germany. >> Thank you. >> Oktoberfest, I know it's evening over there, weekend's here. And thank you for spending the time. Samir, I'll give you the final word. AWS re:Invent's coming up. We're preparing, we're going to have an exclusive with Adam, we'll do a curtain raise, and do a little preview. What's coming down on your side with the relationship and what can we expect to hear about what you got going on at re:Invent this year? The big show? >> Yeah, so I think Daniel hit upon some of the key points, but what I will say is we do have, for example, specific sessions, both that VMware's driving and then also that AWS is driving. We even have what are called chalk talks, and then even workshops, right? So even with the customers, the attendees who are there, if they're looking to sit and listen to a session, yes, that's there, but if they want to be hands-on, that is also there too. So personally for me, as an IT background, been in the sysadmin world and whatnot, being hands-on, that's one of the key things that I personally am looking forward to. But I think that's one of the key ways just to learn and get familiar with the technology. >> Yeah, and re:Invent's an amazing show for the in-person. You guys nail it every year. We'll have three sets this year at theCUBE and it's becoming popular. We have more and more content. You guys got live streams going on, a lot of content, a lot of media. So thanks for sharing that. Samir, Daniel, thank you for coming on on this part of the Showcase episode of really the customer successes with VMware Cloud on AWS, really accelerating business transformation with AWS and VMware. I'm John Furrier with theCUBE, thanks for watching. (upbeat music)
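Daniel's description of independent storage scaling, with Amazon FSx for NetApp ONTAP and VMware Cloud Flex Storage as the two options, is the most concrete technical item in the conversation above, so here is a rough sketch of the first half of that workflow. It is an editorial illustration under assumed values: the region, subnet IDs, capacity, throughput, and tags are placeholders, and attaching the resulting file system to an SDDC as an NFS datastore is a separate step done through the VMware Cloud console or API that is not shown.

```python
# Rough sketch: provision FSx for NetApp ONTAP storage that a VMware Cloud on
# AWS SDDC could later mount as a supplemental datastore, growing capacity
# without adding hosts. All identifiers and sizes below are placeholders.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

response = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=2048,                 # GiB, scaled independently of SDDC hosts
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",   # spans two AZs in the SDDC's region
        "ThroughputCapacity": 512,        # MBps
        "PreferredSubnetId": "subnet-0aaa1111",
    },
    Tags=[{"Key": "purpose", "Value": "sddc-supplemental-datastore"}],
)

print("File system id:", response["FileSystem"]["FileSystemId"])
```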
Accelerating Business Transformation with VMware Cloud on AWS 10 31
Hello everyone. Welcome to this CUBE Showcase, Accelerating Business Transformation with VMware Cloud on AWS. It's a solution innovation conversation with two great guests: Fred, VP of commercial services at AWS, and Narayan, who's the VP and general manager of cloud solutions at VMware. Gentlemen, thanks for joining me on this showcase. >>Great to be here. >>Hey, thanks for having us on. It's a great topic. You know, we've been covering this VMware Cloud on AWS since the launch going back, and it's been amazing to watch the evolution, from people saying, oh, it's the worst thing I've ever seen, what does this mean?
A lot of folks were kind of not really on board with the vision, but as it played out, as you guys had announced together, it did work out great for VMware, it did work out great for AWS, and it continues years later. I just want to get an update from you guys on where you see this going. Where is the evolution of the solution as we are right now, coming off VMware Explore just recently and going into re:Invent, which is only a couple weeks away, feels like tomorrow. But you know, as we prepare, a lot going on, where are we with the evolution of the solution? >>I mean, the first thing I wanna say is, you know, 2016 was a seminal moment in the history of IT, right? When Pat Gelsinger and Andy Jassy came together to announce this, and I think, John, you were there at the time, I was there, it was a great, great moment. We launched the solution in 2017, the year after that, at VMworld, back when we called it VMworld. I think we have gone from strength to strength. One of the things that has really mattered to us is we have learned from AWS also in the process, this notion of working backwards. So we really, really focused on customer feedback as we built a service offering that's now five years old. Pretty remarkable journey. You know, in the first years we tried to get across all the regions, you know, that was a big focus because there was so much demand for it. >>In the second year we started going deep on enterprise grade features. We invented this pretty awesome feature called stretched clusters, where you could stretch a vSphere cluster using vSAN and NSX across two AZs in the same region. Pretty phenomenal four nines availability that applications started to get with that particular feature. And we kept moving forward with all kinds of integration, with AWS Direct Connect, Transit Gateways, with our own advanced networking capabilities. You know, along the way, disaster recovery, we punched out two new services just focused on that. And then more recently we launched our Outposts partnership. We were up on stage at re:Invent, again with Pat and Andy, announcing AWS Outposts and the VMware flavor of that, VMware Cloud on AWS Outposts. There has been significant growth in our federal sector as well, with our FedRAMP High certification more recently. So all in all, we are super excited. We're five years old. The customer momentum is really, really strong and we are scaling the service massively across all geos and industries. >>That's a great update. And I think one of the things that you mentioned was the advantages you guys got from that relationship. And this has kind of been the theme for AWS since I can remember, from day one. Fred, you guys do the heavy lifting, as you always say, for the customers here. VMware comes on board, takes advantage of AWS and kind of just doesn't miss a beat, continues to move the workloads that everyone's using, you know, vSphere, and these are big workloads on AWS. What's the AWS perspective on this? How do you see it? >>
But that combined experience between both of us on a jointly engineered solution, to bring the best security and the best features that really matter for those workloads, drives a lot of efficiency and speed for the customer. So it's been well received, and the partnership is stronger than ever from an engineering standpoint, from a business standpoint. And obviously it's been very interesting to look at just how we stay day one in terms of looking at new features and work and responding to what customers want. So pretty excited about just seeing the transformation and the speed at which customers can move to VMC. Yeah, >>That's a great value proposition. We've been talking about that in context too. Anyone building on top of the cloud, they can have their own supercloud, as we call it, if you take advantage of all the CapEx and investment Amazon's made and AWS has made, and continues to make, in performance, IaaS and PaaS, all great stuff. I have to ask you guys both, as you see this going to the next level, what are some of the differentiations you see around the service compared to other options on the market? What makes it different? What's the combination? You mentioned jointly engineered, what are some of the key differentiators of the service compared to others? >>Yeah, I think one of the key things Fred talked about is this jointly engineered notion, right from day one. We were the early adopters of the AWS Nitro platform, right? The reinvention of EC2 back five years ago. And so we have been, you know, having a very, very strong engineering partnership at that level. I think from a VMware customer standpoint, you get the full software-defined data center, or compute, storage and networking, on EC2 bare metal across all regions. You can scale that elastically up and down. It's pretty phenomenal just having that consistency globally, right, on AWS EC2 global regions. Now the other thing that's a real differentiator for us, that customers tell us about, is this whole notion of a managed service, right? And this was somewhat new to VMware, but we took away the pain of this undifferentiated heavy lifting where customers had to provision and rack and stack hardware, configure the software on top, and then upgrade the software and the security patches on top. >>So we took away all of that pain as customers transitioned to VMware Cloud on AWS. In fact, my favorite story from last year, when we were all going through the Log4j debacle and the industry was just going through that, right? My favorite proof point from customers was, before they even raised this issue to us, we sent them a notification saying we had already patched all of your systems, no action from you. The customers were super thrilled. I mean, these are large banks and many other customers around the world, super thrilled they had to take no action, but a pretty incredible industry challenge that we were all facing. >>Narayan, that's a great point. You know, the whole managed service piece brings up the security, you were kind of teasing at it, but you know, there's always vulnerabilities that emerge when you are doing complex logic. And as you grow your solutions, there's more bits. You know, Fred, we were commenting before we came on camera, there's more bits than ever before, at the physics layer too, as well as the software. So you never know when there's gonna be a zero-day vulnerability out there. It just happens. We saw one with Fortinet this week, it came outta the woodwork.
But moving fast on those patches, it's huge. This brings up the whole support angle. I wanted to ask you about how you guys are doing that as well, because to me we see the value when we, when we talk to customers on the cube about this, you know, it was a real, real easy understanding of how, what the cloud means to them with VMware now with the aws. But the question that comes up that we wanna get more clarity on is how do you guys handle support together? >>Well, what's interesting about this is that it's, it's done mutually. We have dedicated support teams on both sides that work together pretty seamlessly to make sure that whether there's a issue at any layer, including all the way up into the app layer, as you think about some of the other workloads like sap, we'll go end to end and make sure that we support the customer regardless of where the particular issue might be for them. And on top of that, we look at where, where we're improving reliability in, in as a first order of, of principle between both companies. So from an availability and reliability standpoint, it's, it's top of mind and no matter where the particular item might land, we're gonna go help the customer resolve. That works really well >>On the VMware side. What's been the feedback there? What's the, what are some of the updates? >>Yeah, I think, look, I mean, VMware owns and operates the service, but we have a phenomenal backend relationship with aws. Customers call VMware for the service for any issues and, and then we have a awesome relationship with AWS on the backend for support issues or any hardware issues. The BASKE management that we jointly do, right? All of the hard problems that customers don't have to worry about. I think on the front end, we also have a really good group of solution architects across the companies that help to really explain the solution. Do complex things like cloud migration, which is much, much easier with VMware cloud aws, you know, we are presenting that easy button to the public cloud in many ways. And so we have a whole technical audience across the two companies that are working with customers every single day. >>You know, you had mentioned, I've got a list here, some of the innovations the, you mentioned the stretch clustering, you know, getting the GOs working, Advanced network, disaster recovery, you know, fed, Fed ramp, public sector certifications, outposts, all good. You guys are checking the boxes every year. You got a good, good accomplishments list there on the VMware AWS side here in this relationship. The question that I'm interested in is what's next? What recent innovations are you doing? Are you making investments in what's on the lists this year? What items will be next year? How do you see the, the new things, the list of accomplishments, people wanna know what's next. They don't wanna see stagnant growth here, they wanna see more action, you know, as as cloud kind of continues to scale and modern applications cloud native, you're seeing more and more containers, more and more, you know, more CF C I C D pipe pipelining with with modern apps, put more pressure on the system. What's new, what's the new innovations? >>Absolutely. And I think as a five yearold service offering innovation is top of mind for us every single day. So just to call out a few recent innovations that we announced in San Francisco at VMware Explorer. First of all, our new platform i four I dot metal, it's isolate based, it's pretty awesome. 
It's the latest and greatest, all the speeds and feeds that we would expect from VMware and AWS at this point in our relationship. We announced two different storage options. This is the notion of working from customer feedback, allowing customers even more price reductions: really take that storage and park it externally, right? And, you know, separate that from compute. So two different storage offerings there. One is with Amazon FSx for NetApp ONTAP, which brings our NetApp partnership into the equation as well, really excited about this offering. >>And the second storage offering is VMware Cloud Flex Storage, VMware's own managed storage offering. Beyond that, we have done a lot of other innovations as well. I really wanted to talk about VMware Cloud Flex Compute, where previously customers could only scale by hosts, and a host is 36 to 48 cores, give or take. But with VMware Cloud Flex Compute, we are now allowing this notion of a resource-defined compute model where customers can just get exactly the vCPU, memory and storage that maps to their applications, however small they might be. So this notion of granularity is really a big innovation that we are launching in the market this year. And then last but not least, talk about ransomware. Of course it's a hot topic in the industry. We are seeing many, many customers ask for this. We are happy to announce a new ransomware recovery with our VMware Cloud DR solution. >>A lot of innovation there, and the way we are able to do machine learning and make sure the workloads that are recovered from snapshots and backups are actually safe to use. So there's a lot of differentiation on that front as well. A lot of networking innovations with Project Northstar, the ability to have layer four through layer seven, you know, new SaaS services in that area as well. Keep in mind that the service already supports managed Kubernetes for containers. It's built in to the same clusters that have virtual machines. And so this notion of a single service with a great TCO for VMs and containers is sort of at the heart of our offer. >>The networking side certainly is a hot area to keep innovating on. Every year it's the same conversation: get better, faster networking, more options there. The Flex Compute is interesting. If you don't mind me getting a quick clarification, could you explain resource defined versus hardware defined? Because this is kind of what we saw at Explore coming out, that notion of resource defined versus hardware defined. What does that mean? >>Yeah, I mean I think we have been super successful in this hardware-defined notion. We're scaling by the hardware unit that we present as software-defined data centers, right? And so that's been super successful. But, you know, customers wanted more, especially customers in different parts of the world who wanted to start even smaller and grow even more incrementally, right? Lower their costs even more. And so this is the part where resource defined starts to be very, very interesting as a way to think about, you know, here's my bag of resources, exactly based on what the customer requests, for five virtual machines, five containers, it's sized exactly for that. And then as utilization grows, behind the scenes we're able to elastically grow it through policies. So that's a whole different dimension. It's a whole different service offering that adds value, and customers are comfortable.
They can go from one to the other, they can go back to that post based model if they so choose to. And there's a jump off point across these two different economic models. >>It's kind of cloud of flexibility right there. I like the name Fred. Let's get into some of the examples of customers, if you don't mind. Let's get into some of the ex, we have some time. I wanna unpack a little bit of what's going on with the customer deployments. One of the things we've heard again on the cube is from customers is they like the clarity of the relationship, they love the cloud positioning of it. And then what happens is they lift and shift the workloads and it's like, feels great. It's just like we're running VMware on AWS and then they would start consuming higher level services, kind of that adoption next level happens and because it it's in the cloud, so, So can you guys take us through some recent examples of customer wins or deployments where they're using VMware cloud on AWS on getting started, and then how do they progress once they're there? How does it evolve? Can you just walk us through a couple of use cases? >>Sure. There's a, well there's a couple. One, it's pretty interesting that, you know, like you said, as there's more and more bits you need better and better hardware and networking. And we're super excited about the I four and the capabilities there in terms of doubling and or tripling what we're doing around a lower variability on latency and just improving all the speeds. But what customers are doing with it, like the college in New Jersey, they're accelerating their deployment on a, on onboarding over like 7,400 students over a six to eight month period. And they've really realized a ton of savings. But what's interesting is where and how they can actually grow onto additional native services too. So connectivity to any other services is available as they start to move and migrate into this. The, the options there obviously are tied to all the innovation that we have across any services, whether it's containerized and with what they're doing with Tanu or with any other container and or services within aws. >>So there's, there's some pretty interesting scenarios where that data and or the processing, which is moved quickly with full compliance, whether it's in like healthcare or regulatory business is, is allowed to then consume and use things, for example, with tech extract or any other really cool service that has, you know, monthly and quarterly innovations. So there's things that you just can't, could not do before that are coming out and saving customers money and building innovative applications on top of their, their current app base in, in a rapid fashion. So pretty excited about it. There's a lot of examples. I think I probably don't have time to go into too, too many here. Yeah. But that's actually the best part is listening to customers and seeing how many net new services and new applications are they actually building on top of this platform. >>Nora, what's your perspective from the VMware sy? So, you know, you guys have now a lot of headroom to offer customers with Amazon's, you know, higher level services and or whatever's homegrown where's being rolled out? Cuz you now have a lot of hybrid too, so, so what's your, what's your take on what, what's happening in with customers? 
>>I mean, it's been phenomenal, the, the customer adoption of this and you know, banks and many other highly sensitive verticals are running production grade applications, tier one applications on the service over the last five years. And so, you know, I have a couple of really good examples. S and p Global is one of my favorite examples. Large bank, they merge with IHS market, big sort of conglomeration. Now both customers were using VMware cloud and AWS in different ways. And with the, with the use case, one of their use cases was how do I just respond to these global opportunities without having to invest in physical data centers? And then how do I migrate and consolidate all my data centers across the global, which there were many. And so one specific example for this company was how they migrated thousand 1000 workloads to VMware cloud AWS in just six weeks. Pretty phenomenal. If you think about everything that goes into a cloud migration process, people process technology and the beauty of the technology going from VMware point A to VMware point B, the the lowest cost, lowest risk approach to adopting VMware, VMware cloud, and aws. So that's, you know, one of my favorite examples. There are many other examples across other verticals that we continue to see. The good thing is we are seeing rapid expansion across the globe that constantly entering new markets with the limited number of regions and progressing our roadmap there. >>Yeah, it's great to see, I mean the data center migrations go from months, many, many months to weeks. It's interesting to see some of those success stories. So congratulations. One >>Of other, one of the other interesting fascinating benefits is the sustainability improvement in terms of being green. So the efficiency gains that we have both in current generation and new generation processors and everything that we're doing to make sure that when a customer can be elastic, they're also saving power, which is really critical in a lot of regions worldwide at this point in time. They're, they're seeing those benefits. If you're running really inefficiently in your own data center, that is just a, not a great use of power. So the actual calculators and the benefits to these workloads is, are pretty phenomenal just in being more green, which I like. We just all need to do our part there. And, and this is a big part of it here. >>It's a huge, it's a huge point about the sustainability. Fred, I'm glad you called that out. The other one I would say is supply chain issues. Another one you see that constrains, I can't buy hardware. And the third one is really obvious, but no one really talks about it. It's security, right? I mean, I remember interviewing Stephen Schmidt with that AWS and many years ago, this is like 2013, and you know, at that time people were saying the cloud's not secure. And he's like, listen, it's more secure in the cloud on premise. And if you look at the security breaches, it's all about the on-premise data center vulnerabilities, not so much hardware. So there's a lot you gotta to stay current on, on the isolation there is is hard. So I think, I think the security and supply chain, Fred is, is another one. Do you agree? >>I I absolutely agree. It's, it's hard to manage supply chain nowadays. We put a lot of effort into that and I think we have a great ability to forecast and make sure that we can lean in and, and have the resources that are available and run them, run them more efficiently. 
Yeah, and then like you said on the security point, security is job one. It is, it is the only P one. And if you think of how we build our infrastructure from Nitro all the way up and how we respond and work with our partners and our customers, there's nothing more important. >>And naron your point earlier about the managed service patching and being on top of things, it's really gonna get better. All right, final question. I really wanna thank you for your time on this showcase. It's really been a great conversation. Fred, you had made a comment earlier. I wanna kind of end with kind of a curve ball and put you eyes on the spot. We're talking about a modern, a new modern shift. It's another, we're seeing another inflection point, we've been documenting it, it's almost like cloud hitting another inflection point with application and open source growth significantly at the app layer. Continue to put a lot of pressure and, and innovation in the infrastructure side. So the question is for you guys each to answer is what's the same and what's different in today's market? So it's kind of like we want more of the same here, but also things have changed radically and better here. What are the, what's, what's changed for the better and where, what's still the same kind of thing hanging around that people are focused on? Can you share your perspective? >>I'll, I'll, I'll, I'll tackle it. You know, businesses are complex and they're often unique that that's the same. What's changed is how fast you can innovate. The ability to combine manage services and new innovative services and build new applications is so much faster today. Leveraging world class hardware that you don't have to worry about that's elastic. You, you could not do that even five, 10 years ago to the degree you can today, especially with innovation. So innovation is accelerating at a, at a rate that most people can't even comprehend and understand the, the set of services that are available to them. It's really fascinating to see what a one pizza team of of engineers can go actually develop in a week. It is phenomenal. So super excited about this space and it's only gonna continue to accelerate that. That's my take. All right. >>You got a lot of platform to compete on with, got a lot to build on then you're Ryan, your side, What's your, what's your answer to that question? >>I think we are seeing a lot of innovation with new applications that customers are constant. I think what we see is this whole notion of how do you go from desktop to production to the secure supply chain and how can we truly, you know, build on the agility that developers desire and build all the security and the pipelines to energize that motor production quickly and efficiently. I think we, we are seeing, you know, we are at the very start of that sort of of journey. Of course we have invested in Kubernetes the means to an end, but there's so much more beyond that's happening in industry. And I think we're at the very, very beginning of this transformations, enterprise transformation that many of our customers are going through and we are inherently part of it. >>Yeah. Well gentlemen, I really appreciate that we're seeing the same thing. It's more the same here on, you know, solving these complexities with distractions. Whether it's, you know, higher level services with large scale infrastructure at, at your fingertips. Infrastructures, code, infrastructure to be provisioned, serverless, all the good stuff happen in Fred with AWS on your side. 
And we're seeing customers resonate with this idea of being an operator, again, being a cloud operator and developer. So the developer ops is kind of, DevOps is kind of changing too. So all for the better. Thank you for spending the time and we're seeing again, that traction with the VMware customer base and of us getting, getting along great together. So thanks for sharing your perspectives, >>I appreciate it. Thank you so >>Much. Okay, thank you John. Okay, this is the Cube and AWS VMware showcase, accelerating business transformation. VMware cloud on aws, jointly engineered solution, bringing innovation to the VMware customer base, going to the cloud and beyond. I'm John Fur, your host. Thanks for watching. Hello everyone. Welcome to the special cube presentation of accelerating business transformation on vmc on aws. I'm John Furrier, host of the Cube. We have dawan director of global sales and go to market for VMware cloud on adb. This is a great showcase and should be a lot of fun. Ashish, thanks for coming on. >>Hi John. Thank you so much. >>So VMware cloud on AWS has been well documented as this big success for VMware and aws. As customers move their workloads into the cloud, IT operations of VMware customers has signaling a lot of change. This is changing the landscape globally is on cloud migration and beyond. What's your take on this? Can you open this up with the most important story around VMC on aws? >>Yes, John. The most important thing for our customers today is the how they can safely and swiftly move their ID infrastructure and applications through cloud. Now, VMware cloud AWS is a service that allows all vSphere based workloads to move to cloud safely, swiftly and reliably. Banks can move their core, core banking platforms, insurance companies move their core insurance platforms, telcos move their goss, bss, PLA platforms, government organizations are moving their citizen engagement platforms using VMC on aws because this is one platform that allows you to move it, move their VMware based platforms very fast. Migrations can happen in a matter of days instead of months. Extremely securely. It's a VMware manage service. It's very secure and highly reliably. It gets the, the reliability of the underlyings infrastructure along with it. So win-win from our customers perspective. >>You know, we reported on this big news in 2016 with Andy Chas, the, and Pat Geling at the time, a lot of people said it was a bad deal. It turned out to be a great deal because not only could VMware customers actually have a cloud migrate to the cloud, do it safely, which was their number one concern. They didn't want to have disruption to their operations, but also position themselves for what's beyond just shifting to the cloud. So I have to ask you, since you got the finger on the pulse here, what are we seeing in the market when it comes to migrating and modern modernizing in the cloud? Because that's the next step. They go to the cloud, you guys have done that, doing it, then they go, I gotta modernize, which means kind of upgrading or refactoring. What's your take on that? >>Yeah, absolutely. Look, the first step is to help our customers assess their infrastructure and licensing and entire ID operations. Once we've done the assessment, we then create their migration plans. A lot of our customers are at that inflection point. They're, they're looking at their real estate, ex data center, real estate. They're looking at their contracts with colocation vendors. 
They really want to exit their data centers, right? And VMware cloud and AWS is a perfect solution for customers who wanna exit their data centers, migrate these applications onto the AWS platform using VMC on aws, get rid of additional real estate overheads, power overheads, be socially and environmentally conscious by doing that as well, right? So that's the migration story, but to your point, it doesn't end there, right? Modernization is a critical aspect of the entire customer journey as as well customers, once they've migrated their ID applications and infrastructure on cloud get access to all the modernization services that AWS has. They can correct easily to our data lake services, to our AIML services, to custom databases, right? They can decide which applications they want to keep and which applications they want to refactor. They want to take decisions on containerization, make decisions on service computing once they've come to the cloud. But the most important thing is to take that first step. You know, exit data centers, come to AWS using vmc or aws, and then a whole host of modernization options available to them. >>Yeah, I gotta say, we had this right on this, on this story, because you just pointed out a big thing, which was first order of business is to make sure to leverage the on-prem investments that those customers made and then migrate to the cloud where they can maintain their applications, their data, their infrastructure operations that they're used to, and then be in position to start getting modern. So I have to ask you, how are you guys specifically, or how is VMware cloud on s addressing these needs of the customers? Because what happens next is something that needs to happen faster. And sometimes the skills might not be there because if they're running old school, IT ops now they gotta come in and jump in. They're gonna use a data cloud, they're gonna want to use all kinds of machine learning, and there's a lot of great goodness going on above the stack there. So as you move with the higher level services, you know, it's a no brainer, obviously, but they're not, it's not yesterday's higher level services in the cloud. So how are, how is this being addressed? >>Absolutely. I think you hit up on a very important point, and that is skills, right? When our customers are operating, some of the most critical applications I just mentioned, core banking, core insurance, et cetera, they're most of the core applications that our customers have across industries, like even, even large scale ERP systems, they're actually sitting on VMware's vSphere platform right now. When the customer wants to migrate these to cloud, one of the key bottlenecks they face is skill sets. They have the trained manpower for these core applications, but for these high level services, they may not, right? So the first order of business is to help them ease this migration pain as much as possible by not wanting them to, to upscale immediately. And we VMware cloud and AWS exactly does that. I mean, you don't have to do anything. You don't have to create new skill set for doing this, right? Their existing skill sets suffice, but at the same time, it gives them that, that leeway to build that skills roadmap for their team. DNS is invested in that, right? Yes. We want to help them build those skills in the high level services, be it aml, be it, be it i t be it data lake and analytics. We want to invest in them, and we help our customers through that. 
So that ultimately the ultimate goal of making them drop data is, is, is a front and center. >>I wanna get into some of the use cases and success stories, but I want to just reiterate, hit back your point on the skill thing. Because if you look at what you guys have done at aws, you've essentially, and Andy Chassey used to talk about this all the time when I would interview him, and now last year Adam was saying the same thing. You guys do all the heavy lifting, but if you're a VMware customer user or operator, you are used to things. You don't have to be relearn to be a cloud architect. Now you're already in the game. So this is like almost like a instant path to cloud skills for the VMware. There's hundreds of thousands of, of VMware architects and operators that now instantly become cloud architects, literally overnight. Can you respond to that? Do you agree with that? And then give an example. >>Yes, absolutely. You know, if you have skills on the VMware platform, you know, know, migrating to AWS using via by cloud and AWS is absolutely possible. You don't have to really change the skills. The operations are exactly the same. The management systems are exactly the same. So you don't really have to change anything but the advantages that you get access to all the other AWS services. So you are instantly able to integrate with other AWS services and you become a cloud architect immediately, right? You are able to solve some of the critical problems that your underlying IT infrastructure has immediately using this. And I think that's a great value proposition for our customers to use this service. >>And just one more point, I want just get into something that's really kind of inside baseball or nuanced VMC or VMware cloud on AWS means something. Could you take a minute to explain what on AWS means? Just because you're like hosting and using Amazon as a, as a work workload? Being on AWS means something specific in your world, being VMC on AWS mean? >>Yes. This is a great question, by the way, You know, on AWS means that, you know, VMware's vse platform is, is a, is an iconic enterprise virtualization software, you know, a disproportionately high market share across industries. So when we wanted to create a cloud product along with them, obviously our aim was for them, for the, for this platform to have the goodness of the AWS underlying infrastructure, right? And, and therefore, when we created this VMware cloud solution, it it literally use the AWS platform under the eighth, right? And that's why it's called a VMs VMware cloud on AWS using, using the, the, the wide portfolio of our regions across the world and the strength of the underlying infrastructure, the reliability and, and, and sustainability that it offers. And therefore this product is called VMC on aws. >>It's a distinction I think is worth noting, and it does reflect engineering and some levels of integration that go well beyond just having a SaaS app and, and basically platform as a service or past services. So I just wanna make sure that now super cloud, we'll talk about that a little bit in another interview, but I gotta get one more question in before we get into the use cases and customer success stories is in, in most of the VM world, VMware world, in that IT world, it used to, when you heard migration, people would go, Oh my God, that's gonna take months. And when I hear about moving stuff around and doing cloud native, the first reaction people might have is complexity. 
So two questions for you before we move on to the next talk. Track complexity. How are you addressing the complexity issue and how long these migrations take? Is it easy? Is it it hard? I mean, you know, the knee jerk reaction is month, You're very used to that. If they're dealing with Oracle or other old school vendors, like, they're, like the old guard would be like, takes a year to move stuff around. So can you comment on complexity and speed? >>Yeah. So the first, first thing is complexity. And you know, what makes what makes anything complex is if you're, if you're required to acquire new skill sets or you've gotta, if you're required to manage something differently, and as far as VMware cloud and AWS on both these aspects, you don't have to do anything, right? You don't have to acquire new skill sets. Your existing idea operation skill sets on, on VMware's platforms are absolutely fine and you don't have to manage it any differently like, than what you're managing your, your ID infrastructure today. So in both these aspects, it's exactly the same and therefore it is absolutely not complex as far as, as far as, as far as we cloud and AWS is concerned. And the other thing is speed. This is where the huge differentiation is. You have seen that, you know, large banks and large telcos have now moved their workloads, you know, literally in days instead of months. >>Because because of VMware cloud and aws, a lot of time customers come to us with specific deadlines because they want to exit their data centers on a particular date. And what happens, VMware cloud and AWS is called upon to do that migration, right? So speed is absolutely critical. The reason is also exactly the same because you are using the exactly the same platform, the same management systems, people are available to you, you're able to migrate quickly, right? I would just reference recently we got an award from President Zelensky of Ukraine for, you know, migrating their entire ID digital infrastructure and, and that that happened because they were using VMware cloud database and happened very swiftly. >>That's been a great example. I mean, that's one political, but the economic advantage of getting outta the data center could be national security. You mentioned Ukraine, I mean Oscar see bombing and death over there. So clearly that's a critical crown jewel for their running their operations, which is, you know, you know, world mission critical. So great stuff. I love the speed thing. I think that's a huge one. Let's get into some of the use cases. One of them is, the first one I wanted to talk about was we just hit on data, data center migration. It could be financial reasons on a downturn or our, or market growth. People can make money by shifting to the cloud, either saving money or making money. You win on both sides. It's a, it's a, it's almost a recession proof, if you will. Cloud is so use case for number one data center migration. Take us through what that looks like. Give an example of a success. Take us through a day, day in the life of a data center migration in, in a couple minutes. >>Yeah. You know, I can give you an example of a, of a, of a large bank who decided to migrate, you know, their, all their data centers outside their existing infrastructure. And they had, they had a set timeline, right? They had a set timeline to migrate the, the, they were coming up on a renewal and they wanted to make sure that this set timeline is met. We did a, a complete assessment of their infrastructure. 
We did a complete assessment of their IT applications, more than 80% of their IT applications, underlying v vSphere platform. And we, we thought that the right solution for them in the timeline that they wanted, right, is VMware cloud ands. And obviously it was a large bank, it wanted to do it safely and securely. It wanted to have it completely managed, and therefore VMware cloud and aws, you know, ticked all the boxes as far as that is concerned. >>I'll be happy to report that the large bank has moved to most of their applications on AWS exiting three of their data centers, and they'll be exiting 12 more very soon. So that's a great example of, of, of the large bank exiting data centers. There's another Corolla to that. Not only did they manage to manage to exit their data centers and of course use and be more agile, but they also met their sustainability goals. Their board of directors had given them goals to be carbon neutral by 2025. They found out that 35% of all their carbon foot footprint was in their data centers. And if they moved their, their ID infrastructure to cloud, they would severely reduce the, the carbon footprint, which is 35% down to 17 to 18%. Right? And that meant their, their, their, their sustainability targets and their commitment to the go to being carbon neutral as well. >>And that they, and they shift that to you guys. Would you guys take that burden? A heavy lifting there and you guys have a sustainability story, which is a whole nother showcase in and of itself. We >>Can Exactly. And, and cause of the scale of our, of our operations, we are able to, we are able to work on that really well as >>Well. All right. So love the data migration. I think that's got real proof points. You got, I can save money, I can, I can then move and position my applications into the cloud for that reason and other reasons as a lot of other reasons to do that. But now it gets into what you mentioned earlier was, okay, data migration, clearly a use case and you laid out some successes. I'm sure there's a zillion others. But then the next step comes, now you got cloud architects becoming minted every, and you got managed services and higher level services. What happens next? Can you give us an example of the use case of the modernization around the NextGen workloads, NextGen applications? We're starting to see, you know, things like data clouds, not data warehouses. We're not gonna data clouds, it's gonna be all kinds of clouds. These NextGen apps are pure digital transformation in action. Take us through a use case of how you guys make that happen with a success story. >>Yes, absolutely. And this is, this is an amazing success story and the customer here is s and p global ratings. As you know, s and p global ratings is, is the world leader as far as global ratings, global credit ratings is concerned. And for them, you know, the last couple of years have been tough as far as hardware procurement is concerned, right? The pandemic has really upended the, the supply chain. And it was taking a lot of time to procure hardware, you know, configure it in time, make sure that that's reliable and then, you know, distribute it in the wide variety of, of, of offices and locations that they have. And they came to us. We, we did, again, a, a, a alar, a fairly large comprehensive assessment of their ID infrastructure and their licensing contracts. And we also found out that VMware cloud and AWS is the right solution for them. 
>>So we worked there, migrated all their applications, and as soon as we migrated all their applications, they got, they got access to, you know, our high level services be our analytics services, our machine learning services, our, our, our, our artificial intelligence services that have been critical for them, for their growth. And, and that really is helping them, you know, get towards their next level of modern applications. Right Now, obviously going forward, they will have, they will have the choice to, you know, really think about which applications they want to, you know, refactor or which applications they want to go ahead with. That is really a choice in front of them. And, but you know, the, we VMware cloud and AWS really gave them the opportunity to first migrate and then, you know, move towards modernization with speed. >>You know, the speed of a startup is always the kind of the Silicon Valley story where you're, you know, people can make massive changes in 18 months, whether that's a pivot or a new product. You see that in startup world. Now, in the enterprise, you can see the same thing. I noticed behind you on your whiteboard, you got a slogan that says, are you thinking big? I know Amazon likes to think big, but also you work back from the customers and, and I think this modern application thing's a big deal because I think the mindset has always been constrained because back before they moved to the cloud, most IT, and, and, and on-premise data center shops, it's slow. You gotta get the hardware, you gotta configure it, you gotta, you gotta stand it up, make sure all the software is validated on it, and loading a database and loading oss, I mean, mean, yeah, it got easier and with scripting and whatnot, but when you move to the cloud, you have more scale, which means more speed, which means it opens up their capability to think differently and build product. What are you seeing there? Can you share your opinion on that epiphany of, wow, things are going fast, I got more time to actually think about maybe doing a cloud native app or transforming this or that. What's your, what's your reaction to that? Can you share your opinion? >>Well, ultimately we, we want our customers to utilize, you know, most of our modern services, you know, applications should be microservices based. When desired, they should use serverless applic. So list technology, they should not have monolithic, you know, relational database contracts. They should use custom databases, they should use containers when needed, right? So ultimately, we want our customers to use these modern technologies to make sure that their IT infrastructure, their licensing, their, their entire IT spend is completely native to cloud technologies. They work with the speed of a startup, but it's important for them to, to, to get to the first step, right? So that's why we create this journey for our customers, where you help them migrate, give them time to build the skills, they'll help them mo modernize, take our partners along with their, along with us to, to make sure that they can address the need for our customers. That's, that's what our customers need today, and that's what we are working backwards from. >>Yeah, and I think that opens up some big ideas. I'll just say that the, you know, we're joking, I was joking the other night with someone here in, in Palo Alto around serverless, and I said, you know, soon you're gonna hear words like architectural list. 
And that's a criticism on one hand, but you might say, Hey, you know, if you don't really need an architecture, you know, storage lists, I mean, at the end of the day, infrastructure is code means developers can do all the it in the coding cycles and then make the operations cloud based. And I think this is kind of where I see the dots connecting. Final thought here, take us through what you're thinking around how this new world is evolving. I mean, architecturals kind of a joke, but the point is, you know, you have to some sort of architecture, but you don't have to overthink it. >>Totally. No, that's a great thought, by the way. I know it's a joke, but it's a great thought because at the end of the day, you know, what do the customers really want? They want outcomes, right? Why did service technology come? It was because there was an outcome that they needed. They didn't want to get stuck with, you know, the, the, the real estate of, of a, of a server. They wanted to use compute when they needed to, right? Similarly, what you're talking about is, you know, outcome based, you know, desire of our customers and, and, and that's exactly where the word is going to, Right? Cloud really enforces that, right? We are actually, you know, working backwards from a customer's outcome and using, using our area the breadth and depth of our services to, to deliver those outcomes, right? And, and most of our services are in that path, right? When we use VMware cloud and aws, the outcome is a, to migrate then to modernize, but doesn't stop there, use our native services, you know, get the business outcomes using this. So I think that's, that's exactly what we are going through >>Actually, should actually, you're the director of global sales and go to market for VMware cloud on Aus. I wanna thank you for coming on, but I'll give you the final minute. Give a plug, explain what is the VMware cloud on Aus, Why is it great? Why should people engage with you and, and the team, and what ultimately is this path look like for them going forward? >>Yeah. At the end of the day, we want our customers to have the best paths to the cloud, right? The, the best path to the cloud is making sure that they migrate safely, reliably, and securely as well as with speed, right? And then, you know, use that cloud platform to, to utilize AWS's native services to make sure that they modernize their IT infrastructure and applications, right? We want, ultimately that our customers, customers, customer get the best out of, you know, utilizing the, that whole application experience is enhanced tremendously by using our services. And I think that's, that's exactly what we are working towards VMware cloud AWS is, is helping our customers in that journey towards migrating, modernizing, whether they wanna exit a data center or whether they wanna modernize their applications. It's a essential first step that we wanna help our customers with >>One director of global sales and go to market with VMware cloud on neighbors. He's with aws sharing his thoughts on accelerating business transformation on aws. This is a showcase. We're talking about the future path. We're talking about use cases with success stories from customers as she's thank you for spending time today on this showcase. >>Thank you, John. I appreciate it. >>Okay. This is the cube, special coverage, special presentation of the AWS Showcase. I'm John Furrier, thanks for watching.
Evolving InfluxDB into the Smart Data Platform Full Episode
>>This past May, theCUBE, in collaboration with InfluxData, shared with you the latest innovations in time series databases. We talked at length about why a purpose-built time series database was, for many use cases, a superior alternative to general purpose databases trying to do the same thing. Now, you may remember, time series data is any data that's stamped in time, and if it's stamped, it can be analyzed historically. And when we introduced the concept to the community, we talked about how, in theory, those time slices could be taken, you know, every hour, every minute, every second, you know, down to the millisecond, and how the world was moving toward realtime or near realtime data analysis to support physical infrastructure like sensors and other devices and IoT equipment. Time series databases have had to evolve to efficiently support realtime data in emerging use cases in IoT and other areas. >>And to do that, new architectural innovations have to be brought to bear. As is often the case, open source software is the linchpin to those innovations. Hello and welcome to Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData and produced by theCUBE. My name is Dave Vellante and I'll be your host today. Now in this program we're going to dig pretty deep into what's happening with time series data generally, and specifically how InfluxDB is evolving to support new workloads and demands on data, and specifically around data analytics use cases in real time. Now, first we're gonna hear from Brian Gilmore, who is the director of IoT and emerging technologies at InfluxData. And we're gonna talk about the continued evolution of InfluxDB and the new capabilities enabled by open source generally and specific tools. And in this program you're gonna hear a lot about things like Rust, the implementation of Apache Arrow, the use of Parquet, and tooling such as DataFusion, which is powering a new engine for InfluxDB. >>Now, these innovations, they evolve the idea of time series analysis by dramatically increasing the granularity of time series data, by compressing the historical time slices, if you will, from, for example, minutes down to milliseconds, and at the same time enabling real time analytics with an architecture that can process data much faster and much more efficiently. Now, after Brian, we're gonna hear from Anais Dotis-Georgiou, who is a developer advocate at InfluxData. And we're gonna get into the why of these open source capabilities and how they contribute to the evolution of the InfluxDB platform. And then we're gonna close the program with Tim Yokum, he's the director of engineering at InfluxData, and he's gonna explain how the InfluxDB community actually evolved the data engine in mid-flight and which decisions went into the innovations that are coming to the market. Thank you for being here. We hope you enjoy the program. Let's get started. Okay, we're kicking things off with Brian Gilmore. He's the director of IoT and emerging technology at InfluxData. Brian, welcome to the program. Thanks for coming on. >>Thanks Dave. Great to be here. I appreciate the time. >>Hey, explain why InfluxDB, you know, needs a new engine. Was there something wrong with the current engine? What's going on there? >>No, no, not at all. I mean, I think it's, for us, it's been about staying ahead of the market.
I think, you know, if we think about what our customers are coming to us sort of with now, you know, related to requests like sql, you know, query support, things like that, we have to figure out a way to, to execute those for them in a way that will scale long term. And then we also, we wanna make sure we're innovating, we're sort of staying ahead of the market as well and sort of anticipating those future needs. So, you know, this is really a, a transparent change for our customers. I mean, I think we'll be adding new capabilities over time that sort of leverage this new engine, but you know, initially the customers who are using us are gonna see just great improvements in performance, you know, especially those that are working at the top end of the, of the workload scale, you know, the massive data volumes and things like that. >>Yeah, and we're gonna get into that today and the architecture and the like, but what was the catalyst for the enhancements? I mean, when and how did this all come about? >>Well, I mean, like three years ago we were primarily on premises, right? I mean, I think we had our open source, we had an enterprise product, you know, and, and sort of shifting that technology, especially the open source code base to a service basis where we were hosting it through, you know, multiple cloud providers. That was, that was, that was a long journey I guess, you know, phase one was, you know, we wanted to host enterprise for our customers, so we sort of created a service that we just managed and ran our enterprise product for them. You know, phase two of this cloud effort was to, to optimize for like multi-tenant, multi-cloud, be able to, to host it in a truly like sass manner where we could use, you know, some type of customer activity or consumption as the, the pricing vector, you know, And, and that was sort of the birth of the, of the real first influx DB cloud, you know, which has been really successful. >>We've seen, I think like 60,000 people sign up and we've got tons and tons of, of both enterprises as well as like new companies, developers, and of course a lot of home hobbyists and enthusiasts who are using out on a, on a daily basis, you know, and having that sort of big pool of, of very diverse and very customers to chat with as they're using the product, as they're giving us feedback, et cetera, has has, you know, pointed us in a really good direction in terms of making sure we're continuously improving that and then also making these big leaps as we're doing with this, with this new engine. >>Right. So you've called it a transparent change for customers, so I'm presuming it's non-disruptive, but I really wanna understand how much of a pivot this is and what, what does it take to make that shift from, you know, time series, you know, specialist to real time analytics and being able to support both? >>Yeah, I mean, it's much more of an evolution, I think, than like a shift or a pivot. You know, time series data is always gonna be fundamental and sort of the basis of the solutions that we offer our customers, and then also the ones that they're building on the sort of raw APIs of our platform themselves. You know, the time series market is one that we've worked diligently to lead. 
I mean, I think when it comes to like metrics, especially sensor data and app and infrastructure metrics, if we're being honest, I think our user base is well aware that the way we were architected was much more towards those sort of backwards-looking, historical-type analytics, which are key for troubleshooting and making sure you don't run into the same problem twice. But, you know, we had to ask ourselves, what can we do to better handle those queries from a performance and a time-to-response perspective, and can we get that to the point where the result sets are coming back so quickly from the time of query that we can limit that window down to minutes and then seconds? >>And now with this new engine, we're really starting to talk about a query window that could be returning results in milliseconds from the time the data hit the ingest queue. And that's really getting to the point where as your data is available, you can use it, you can query it, you can visualize it, and you can do all those sort of magical things with it, you know? And I think getting all of that to a place where we're saying yes to the customer on all of the real-time queries, the multiple-language query support, you know, it was hard, but we're now at a spot where we can start introducing that to a limited number of customers, strategic customers and strategic availability zones to start, but, you know, everybody over time. >>So you're basically going from what happened, and you can still do that obviously, to what's happening now, in the moment? >>Yeah, yeah. I mean, if you think about time, it's always sort of past, right? In the moment right now, whether you're talking about a millisecond ago or a minute ago, that's pretty much right now, I think, for most people, especially in these use cases where you have other components of latency induced by the underlying data collection, the architecture, the infrastructure, the devices, and the sort of highly distributed nature of all of this. So yeah, getting a customer or a user to be able to use the data as soon as it is available is what we're after here. >>I always thought of real time as before you lose the customer, but now in this context, maybe it's before the machine blows up. >>Yeah, I mean, operational real time is different, you know, and that's one of the things that really triggered us to know that we were heading in the right direction, is just how many sort of operational customers we have. You know, everything from aerospace and defense, we've got companies monitoring satellites, we've got tons of industrial users using us as a process historian on the plant floor. And if we can satisfy their demands for a real-time historical perspective, that's awesome. I think what we're gonna do here is start to edge into the real time that they're used to in terms of the millisecond response times they expect of their control systems, certainly not their historians and databases. >>Are these innovations available to InfluxDB Cloud customers only, who can access this capability? >>Yeah. I mean, commercially and today, yes. 
You know, I think we want to emphasize that's for now; our goal is to get our latest and greatest and our best to everybody over time, of course. One of the things we had to do here was double down on our commitment to open source and availability. So anybody today can take a look at the libraries on our GitHub, inspect them, and even try to implement or execute some of it themselves in their own infrastructure. We are committed to bringing our latest and greatest to our cloud customers first for a couple of reasons. Number one, there are big workloads and they have high expectations of us. Number two, it also gives us the opportunity to monitor a little bit more closely how it's working, how they're using it, how the system itself is performing. >>And so just, you know, being careful, maybe a little cautious in terms of how big we go with this right away, both limits the risk of any issues that can come with new software rollouts, we haven't seen anything so far, and it also gives us the opportunity to have meaningful conversations with a small group of users who are using the products. But once we get through that and they give us two thumbs up on it, it'll be like, open the gates and let everybody in. It's gonna be an exciting time for the whole ecosystem. >>Yeah, that makes a lot of sense. And you can do some experimentation, you know, using the cloud resources. Let's dig into some of the architectural and technical innovations that are gonna help deliver on this vision. What should we know there? >>Well, I mean, I think foundationally we built the new core on Rust. You know, this is a very popular new systems language; it's extremely efficient, but it's also built for speed and memory safety, which goes back to us being able to deliver it in a way that is something we can inspect very closely, but then also rely on the fact that it's going to behave well and surface error conditions if it finds them. I mean, we've loved working with Go, and a lot of our libraries will continue to be implemented in Go, but when it came to this particular new engine, for that power, performance and stability, Rust was critical. On top of that, we've also integrated Apache Arrow and Apache Parquet for persistence. I think for anybody who's really familiar with the nuts and bolts of our backend, our TSI and our Time-Structured Merge Trees, this is a big break from that, you know, Arrow on the in-memory side and then Parquet on the on-disk side. >>It allows us to present a unified set of APIs for those really fast real-time queries that we talked about, as well as for very large historical bulk data archives in that Parquet format, which is also cool because there's an entire ecosystem popping up around Parquet in terms of the machine learning community. And getting that all to work, we had to glue it together with Arrow Flight. That's what we're using as our RPC component. It handles the orchestration and the transportation of the columnar data. 
Now we're moving to a true columnar database model for this version of the engine, and it removes a lot of overhead for us in terms of having to manage all that serialization and deserialization. And, you know, to that point again, it blurs that line between real-time and historical data. It's highly optimized for both streaming and micro-batch, and then batches, but true streaming as well. >>Yeah. Again, it's funny you mention Rust. It's been around for a long time, but its popularity is really starting to hit that steep part of the S-curve. And we're gonna dig into more of that, but is there anything else that we should know about, Brian? Give us the last word. >>Well, I mean, I think first I'd like everybody watching to just take a look at what we're offering in terms of early access and beta programs. If you wanna participate, or if you wanna work in terms of early access with the new engine, please reach out to the team. I'm sure there's a lot of communications going out, and it'll be highly featured on our website, but reach out to the team. Believe it or not, we have a lot more going on than just the new engine, and so there are also other programs, things we're offering to customers in terms of the user interface, data collection, and things like that. And, you know, if you're a customer of ours and you have a sales team, a commercial team that you work with, you can reach out to them and see what you can get access to, because we can flip a lot of stuff on, especially in cloud, through feature flags. >>But if there's something new that you wanna try out, we'd just love to hear from you. And then, you know, our goal would be that as we give you access to all of these new cool features, you would give us continuous feedback on these products and services, not only what you need today but what you'll need tomorrow, to sort of build the next versions of your business. Because, you know, the whole database, the ecosystem as it expands out into this vertically oriented stack of cloud services and enterprise databases and edge databases, it's gonna be what we all make it together, not just those of us who are employed by InfluxDB. And then finally, I would just say please watch Anais's and Tim's sessions. These are two of our best and brightest. They're totally brilliant, completely pragmatic, and they are most of all customer obsessed, which is amazing. And there are honestly no better takes on the sort of technical details of this, especially when it comes to the value that these investments will bring to our customers and our communities. So I encourage you to pay more attention to them than you did to me, for sure. >>Brian Gilmore, great stuff. Really appreciate your time. Thank you. >>Yeah, thanks Dave. It was awesome. Look forward to it. >>Yeah, me too. Looking forward to seeing how the community actually applies these new innovations and goes beyond just the historical into the real-time, really hot area. As Brian said, in a moment I'll be right back with Anais Dotis-Georgiou to dig into the critical aspects of key open source components of the InfluxDB engine, including Rust, Arrow, Parquet, and DataFusion. Keep it right there. You don't wanna miss this. >>Time series data is everywhere. 
The number of sensors, systems and applications generating time series data increases every day. All these data sources producing so much data can cause analysis paralysis. InfluxDB is an entire platform designed with everything you need to quickly build applications that generate value from time series data. InfluxDB Cloud is a serverless solution, which means you don't need to buy or manage your own servers. There's no need to worry about provisioning because you only pay for what you use. InfluxDB Cloud is fully managed, so you get the newest features and enhancements as they're added to the platform's code base. It also means you can spend time building solutions and delivering value to your users instead of wasting time and effort managing something else. InfluxDB Cloud offers a range of security features to protect your data: multiple layers of redundancy ensure you don't lose any data, and access controls ensure that only the people who should see your data can see it. >>And encryption protects your data at rest and in transit between any of our regions or cloud providers. InfluxDB uses a single API across the entire platform suite, so you can build on open source, deploy to the cloud, and then easily query data in the cloud, at the edge, or on prem using the same scripts. And InfluxDB is schemaless, automatically adjusting to changes in the shape of your data without requiring changes in your application logic. InfluxDB Cloud is production ready from day one. All it needs is your data and your imagination. Get started today at influxdata.com/cloud. >>Okay, we're back. I'm Dave Vellante with theCUBE and you're watching Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData. Anais Dotis-Georgiou is here. She's a developer advocate for InfluxData, and we're gonna dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into real-time analytics. Anais, welcome to the program. Thanks for coming on. >>Hi, thank you so much. It's a pleasure to be here. >>Oh, you're very welcome. Okay, so IOx is being touted as this next-gen open source core for InfluxDB. And my understanding is that it leverages in-memory, of course, for speed. It's a columnar store, so it gives you compression efficiency, it's gonna give you faster query speeds, you store files in object storage, so you've got a very cost-effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high-level value points that people should understand? >>Sure, that's a great question. So some of the main requirements that IOx is trying to achieve, and some of the most impressive ones to me: the first one is that it aims to have no limits on cardinality and also to allow you to write any kind of event data that you want, whether that's a tag or a field. It also wants to deliver best-in-class performance on analytics queries, in addition to our already well-served metrics queries. We also wanna have operator control over memory usage, so you should be able to define how much memory is used for buffering, caching, and query processing. Some other really important parts are the ability to have bulk data export and import, super useful. 
Also, broader ecosystem compatibility: where possible we aim to use and embrace emerging standards in the data analytics ecosystem and have compatibility with things like SQL, Python, and maybe even pandas in the future. >>Okay, so a lot there. Now, we talked to Brian about how you're using Rust, which is not a new programming language, and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns. You've got big guns like Amazon and Google and Microsoft throwing their collective weight behind it. The adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with Rust, but why Rust as an alternative to, say, C++ for example? >>Sure, that's a great question. So Rust was chosen because of its exceptional performance and reliability. While Rust is syntactically similar to C++ and has similar performance, it also compiles to native code like C++. But unlike C++, it also has much better memory safety. Memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks, and Rust achieves this memory safety due to its innovative type system. Additionally, it doesn't allow for dangling pointers, and dangling pointers are the main classes of errors that lead to exploitable security vulnerabilities in languages like C++. So Rust helps meet that requirement of having no limits on cardinality, for example, because we're also using the Rust implementation of Apache Arrow and this control over memory. And Rust's packaging system, called crates.io, offers everything that you need out of the box to have features like async/await to fix race conditions, protection against buffer overflows, and thread-safe async caching structures as well. So essentially it has all the fine-grained control you need to take advantage of memory and all your resources as well as possible, so that you can handle those really, really high cardinality use cases. >>Yeah, and the more I learn about the new engine and the platform, IOx, et cetera, you see things like, you know, in the old days, and even today, you do a lot of garbage collection in these systems and there's an inverse impact relative to performance. So it looks like the community is really modernizing the platform. But I wanna talk about Apache Arrow for a moment. It's designed to address the constraints that are associated with analyzing large data sets. We know that, but please explain what Arrow is and what it brings to InfluxDB. >>Sure, yeah. So Arrow is a framework for defining in-memory columnar data, and so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. And I will, if you don't mind, take a moment to illustrate why columnar data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room and also maybe the temperature of our stove. And in our table we have those two temperature values as well as maybe a measurement value, a timestamp value, maybe some other tag values that describe what room and what house, et cetera, we're getting this data from. 
And so you can picture this table where we have two rows with the two temperature values for both our room and the stove. Well, usually our room temperature is regulated, so those values don't change very often. >>So when you have column-oriented storage, essentially you take each column and group it together. And so if that's the case and you're just taking temperature values from the room, and a lot of those temperature values are the same, then you might be able to imagine how equal values will then neighbor each other in the storage format, and this provides a really perfect opportunity for cheap compression. And then this cheap compression enables high cardinality use cases. It also enables faster scan rates. So if you wanna find, say, the min and max value of the temperature in the room across a thousand different points, you only have to get those thousand points in order to answer that question, and you have those immediately available to you. But let's contrast this with a row-oriented storage solution instead, so that we can better understand the benefits of column-oriented storage. >>So if you had row-oriented storage, you'd first have to look at every field, like the temperature in the room and the temperature of the stove. You'd have to go across every tag value that maybe describes where the room is located or what model the stove is, and every timestamp. You'd then have to pluck out that one temperature value that you want at that one timestamp, and do that for every single row. So you're scanning across a ton more data, and that's why row-oriented doesn't provide the same efficiency as columnar. And Apache Arrow is an in-memory columnar data framework, so that's where a lot of the advantages come from. >>Okay. So you basically described a traditional database, a row approach, but I've seen a lot of traditional databases say, okay, now we can handle columnar format, versus what you're talking about, which is really kind of native. Is it not as effective? Is the format not as effective because it's largely a bolt-on? Can you elucidate on that front? >>Yeah, it's not as effective because you have more expensive compression and because you can't scan across the values as quickly. And those are pretty much the main reasons why row-oriented storage isn't as efficient as column-oriented storage. >>Got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here? >>Sure. So it's an extensible query execution framework, and it uses Arrow as its in-memory format. The way that it helps InfluxDB IOx is that, okay, it's great if you can write an unlimited amount of cardinality into InfluxDB, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So DataFusion helps enable the query, processing and transformation of that data. It also has a Pandas API so that you could take advantage of Pandas data frames as well, and all of the machine learning tools associated with Pandas. >>Okay. You're also leveraging Parquet in the platform, because we heard a lot about Parquet in the middle of the last decade as a storage format to improve on Hadoop column stores. What are you doing with Parquet and why is it important? >>Sure. So Parquet is the column-oriented durable file format. 
So it's important because it'll enable bulk import and bulk export, it has compatibility with Python and Pandas, so it supports a broader ecosystem. Parquet files also take very little disk space and they're faster to scan because, again, they're column-oriented. In particular, I think Parquet files are something like 16 times cheaper than CSV files, just as kind of a point of reference. And so that's essentially a lot of the benefits of Parquet. >>Got it. Very popular. So, Anais, what exactly is InfluxData focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >>Sure. So InfluxDB first has contributed a lot of different things to the Apache ecosystem. For example, they contributed an implementation of Apache Arrow in Go, and that will support querying with Flux. Also, there have been quite a few contributions to DataFusion for things like memory optimization and support for additional SQL features, like support for timestamp arithmetic, support for EXISTS clauses, and support for memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think kind of the idea here is that if you can improve these upstream projects, the long-term strategy is that the more you contribute and build those up, the more you will perpetuate that cycle of improvement, and the more we will invest in our own project as well. So it's just that kind of symbiotic relationship and appreciation of the open source community. >>Yeah. Got it. You got that virtuous cycle going, what people call the flywheel. Give us your last thoughts and kind of summarize, you know, what the big takeaways are from your perspective. >>So I think the big takeaway is that InfluxData is doing a lot of really exciting things with InfluxDB IOx. And I really encourage, if you are interested in learning more about the technologies that Influx is leveraging to produce IOx, the challenges associated with it and all of the hard work, and you just wanna learn more, then I would encourage you to go to the monthly tech talks and community office hours; they are every second Wednesday of the month at 8:30 AM Pacific time. There are also community forums and a community Slack channel; look for the InfluxDB IOx channel specifically to learn more about how to join those office hours and those monthly tech talks, as well as to ask any questions you have about IOx, what to expect, and what you'd like to learn more about. As a developer advocate, I wanna answer your questions. So if there's a particular technology or stack that you wanna dive deeper into and want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. >>Yeah, that's awesome. You guys have a really rich community, collaborate with your peers, solve problems, and you guys are super responsive, so really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data. >>Thank you. I really appreciate it. >>All right, you're very welcome. Okay, stay right there, and in a moment I'll be back with Tim Yoakum. He's the director of engineering for InfluxData, and we're gonna talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't wanna miss this. 
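(Editor's note: to make the column-versus-row and Parquet discussion above concrete, here is a minimal, hedged sketch using pyarrow. The sensor names, values, and file names are invented to mirror the room/stove example, and the exact Parquet-versus-CSV size ratio you see will depend on your data and compression settings, so treat the rough "16x" figure quoted above as a point of reference rather than a guarantee.)

```python
# A hedged illustration of columnar layout, not code from InfluxDB IOx itself.
import os
import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.csv as pacsv
import pyarrow.parquet as pq

n = 1_000_000
table = pa.table({
    "time":   pa.array(range(n), type=pa.int64()),
    "sensor": ["room" if i % 2 == 0 else "stove" for i in range(n)],
    # The room temperature barely changes; the stove varies more.
    "temp_c": [21.0 if i % 2 == 0 else 180.0 + (i % 7) for i in range(n)],
})

# Column-oriented scan: min/max over one column touches only that column,
# and long runs of equal values sit next to each other, compressing cheaply.
print(pc.min_max(table["temp_c"]))

# Persist the same rows as Parquet (columnar, compressed) and CSV (row text)
# and compare the on-disk footprint.
pq.write_table(table, "temps.parquet")
pacsv.write_csv(table, "temps.csv")
print(os.path.getsize("temps.parquet"), "bytes as Parquet vs",
      os.path.getsize("temps.csv"), "bytes as CSV")
```

And because DataFusion speaks SQL over Arrow and Parquet, the same file can then be queried directly. This is a sketch against the datafusion Python bindings; the API has shifted across releases (older versions exposed ExecutionContext rather than SessionContext), so check the version you have installed:

```python
from datafusion import SessionContext  # pip install datafusion

ctx = SessionContext()
ctx.register_parquet("temps", "temps.parquet")
batches = ctx.sql(
    "SELECT sensor, MIN(temp_c) AS low, MAX(temp_c) AS high "
    "FROM temps GROUP BY sensor"
).collect()  # returns Arrow record batches
for batch in batches:
    print(batch.to_pydict())
```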
>>I'm really glad that we went with InfluxDB Cloud for our hosting, because it has saved us a ton of time. It's helped us move faster, it's saved us money, and also InfluxDB has good support. My name's Alex Nauda. I am CTO at Nobl9. Nobl9 is a platform to measure and manage service level objectives, which is a great way of measuring the reliability of your systems. You can essentially think of an SLO, the product we're providing to our customers, as a bunch of time series. So we need a way to store that data and the corresponding time series that are related to those. The main reason that we settled on InfluxDB as we were shopping around is that InfluxDB has a very flexible query language, and as a general-purpose time series database, it basically had the set of features we were looking for. >>As our platform has grown, we've found InfluxDB Cloud to be a really scalable solution. We can quickly iterate on new features and functionality because Influx Cloud is entirely managed; it probably saved us at least a full additional person on our team. We also have the option of running InfluxDB Enterprise, which gives us the ability to even host off the cloud or in a private cloud if that's preferred by a customer. InfluxData has been really flexible in adapting to the hosting requirements that we have. They listened to the challenges we were facing and they helped us solve them. As we've continued to grow, I'm really happy we have InfluxData by our side. >>Okay, we're back with Tim Yoakum, who is the director of engineering at InfluxData. Tim, welcome. Good to see you. >>Good to see you. Thanks for having me. >>You're really welcome. Listen, we've been covering open source software on theCUBE for more than a decade, and we've kind of watched the innovation from the big data ecosystem. The cloud has been built out on open source, mobile, social platforms, key databases, and of course InfluxDB, and InfluxData has been a big consumer and contributor of open source software. So my question to you is, where have you seen the biggest bang for the buck from open source software? >>So yeah, you know, Influx really, we thrive at the intersection of commercial services and open source software. So OSS keeps us on the cutting edge. We benefit from OSS in delivering our own service, from our core storage engine technologies to web services and templating engines. Our team stays lean and focused because we build on proven tools. We really build on the shoulders of giants, and like you've mentioned, even better, we contribute a lot back to the projects that we use, as well as our own product, InfluxDB. >>You know, but I gotta ask you, Tim, because one of the challenges that we've seen in particular, you saw this in the heyday of Hadoop: the innovations come so fast and furious, and as a software company you gotta place bets, you gotta commit people, and sometimes those bets can be risky and not pay off. Well, how have you managed this challenge? >>Oh, it moves fast. Yeah, that's a benefit though, because the community moves so quickly that today's hot technology can be tomorrow's dinosaur. And what we tend to do is fail fast and fail often. We try a lot of things. You know, you look at Kubernetes for example, that ecosystem is driven by thousands of intelligent developers, engineers, builders, they're adding value every day. So we have to really keep up with that. 
And as the stack changes, we try different technologies, we try different methods, and at the end of the day we come up with a better platform as a result of just the constant change in the environment. It is a challenge for us, but it's something that we just do every day. >>So we have a survey partner down in New York City called Enterprise Technology Research, ETR, and they do these quarterly surveys of about 1,500 CIOs and IT practitioners, and they really have a good pulse on what's happening with spending. And the data shows that containers generally, but specifically Kubernetes, is one of the areas that has been off the charts and seen the most significant adoption and velocity, particularly, you know, along with cloud. But really Kubernetes is just, you know, still up and to the right consistently, even with the macro headwinds and all the stuff that we're sick of talking about. So what are you doing with Kubernetes in the platform? >>Yeah, it's really central to our ability to run the product. When we first started out, we were just on AWS, and the way we were running was a little bit like containers junior. Now we're running Kubernetes everywhere: at AWS, Azure, Google Cloud. It allows us to have a consistent experience across three different cloud providers, and we can manage that in code, so our developers can focus on delivering services, not trying to learn the intricacies of Amazon, Azure, and Google and figure out how to deliver services on those three clouds with all of their differences. >>Just to follow up on that, it sounds like there's a PaaS layer there to allow you guys to have a consistent experience across clouds and out to the edge, wherever. Is that correct? >>Yeah, so we've basically built more or less platform engineering, this is the new hot phrase, you know. Kubernetes has made a lot of things easy for us because we've built a platform that our developers can lean on, and they only have to learn one way of deploying their application, managing their application. And so that just gets all of the underlying infrastructure out of the way and lets them focus on delivering Influx Cloud. >>Yeah, and I know I'm taking a little bit of a tangent, but that, I'll call it a PaaS layer if I can use that term. Are there specific attributes to InfluxDB, or is it kind of just generally off-the-shelf PaaS? Is there any purpose-built capability there that is value add, or is it pretty much generic? >>So we really look at things through a build-versus-buy lens. Some things we want to leverage cloud provider services, for instance Postgres databases for metadata, perhaps we'll get that off of our plate, let someone else run that. We're going to deploy a platform that our engineers can deliver on, that has consistency, that is all generated from code, that we can, as an SRE group, as an ops team, manage with very few people really, and we can stamp out clusters across multiple regions in no time. >>So sometimes you build, sometimes you buy it. How do you make those decisions, and what does that mean for the platform and for customers? >>Yeah, so what we're doing is, like everybody else will do, we're looking for trade-offs that make sense. You know, we really want to protect our customers' data. 
So we look for services that support our own software with the most uptime, reliability, and durability we can get. Some things are just going to be easier to have a cloud provider take care of on our behalf. We make that transparent for our own team, and of course for customers you don't even see that, but we don't want to try to reinvent the wheel. Like I had mentioned with SQL data stores for metadata, perhaps let's build on top of what these three large cloud providers have already perfected. And we can then focus on our platform engineering, and we can have our developers focus on the InfluxData software, the Influx Cloud software. >>So take it to the customer level. What does it mean for them? What's the value that they're gonna get out of all these innovations that we've been talking about today, and what can they expect in the future? >>So first of all, people who use the OSS product are really gonna be at home on our cloud platform. You can run it on your desktop machine, on a single server, what have you, but then you want to scale up. We have some 270 terabytes of data across over 4 billion series keys that people have stored. So there's a proven ability to scale. Now, in terms of the open source software and how we've developed the platform, you're getting a highly available, high cardinality time series platform. We manage it, and really, as I mentioned earlier, we can keep up with the state of the art. We keep reinventing, we keep deploying things in real time. We deploy to our platform every day, repeatedly, all the time. And it's that continuous deployment that allows us to continue testing things in flight, rolling out changes, new features, better ways of doing deployments, safer ways of doing deployments. >>All of that happens behind the scenes. And like we had mentioned earlier, Kubernetes allows us to get that done. We couldn't do it without having that platform as a base layer for us to then put our software on. So we iterate quickly. When you're on the Influx Cloud platform, you really are able to take advantage of new features immediately. We roll things out every day, and as those things go into production, you have the ability to use them. And so in the end, we want you to focus on getting actual insights from your data instead of running infrastructure. You know, let us do that for you. >>And that makes sense, but are the innovations that we're talking about in the evolution of InfluxDB, do you see that as sort of a natural evolution for existing customers? I'm sure the answer is both, but is it opening up new territory for customers? Can you add some color to that? >>Yeah, it really is a little bit of both. Any engineer will say, well, it depends. So cloud native technologies are really the hot thing. IoT, industrial IoT especially, people want to just shove tons of data out there and be able to do queries immediately, and they don't wanna manage infrastructure. What we've started to see are people that use the cloud service as their data store backbone, and then they use edge computing with our OSS product to ingest data from, say, multiple production lines and downsample that data, and send the rest of that data off to Influx Cloud where the heavy processing takes place. 
So really, us being in all the different clouds and iterating on that, and being in all sorts of different regions, allows people to really get out of the business of trying to manage that big data; have us take care of that. And of course as we change the platform, end users benefit from that immediately. >>And so, obviously taking away a lot of the heavy lifting for the infrastructure, would you say the same thing about security, especially as you go out to IoT and the edge? How should we be thinking about the value that you bring from a security perspective? >>Yeah, we take security super seriously. It's built into our DNA. We do a lot of work to ensure that our platform is secure, that the data we store is kept private. It's of course always a concern; you see in the news all the time companies being compromised. You know, that's something that you can have an entire team working on, which we do, to make sure that the data that you have, whether it's in transit, whether it's at rest, is always kept secure and is only viewable by you. You know, you look at things like software bills of materials: if you're running this yourself, you have to go vet all sorts of different pieces of software, and we do that as we use new tools. That's just part of our jobs, to make sure that the platform we're running has fully vetted software, and with open source especially, that's a lot of work. And so it's definitely new territory. Supply chain attacks are definitely happening at a higher clip than they used to, but that is really just part of a day in the life for folks like us that are building platforms. >>Yeah, and that's key. I mean, especially when you start getting into, you know, we talk about IoT and the operations technologies, the engineers running that infrastructure. Historically, as you know, Tim, they would air gap everything. That's how they kept it safe. But that's not feasible anymore. Everything's connected now, right? And so you've gotta have a partner that, again, takes away that heavy lifting to R&D so you can focus on some of the other activities. Right. Give us the last word and the key takeaways from your perspective. >>Well, you know, from my perspective I see it as a two-lane approach with Influx, with any time series data. You've got a lot of stuff that you're gonna run on-prem, what you had mentioned, air gapping. Sure, there's plenty of need for that, but at the end of the day, people that don't want to run big data centers, people that want to trust their data to a company that's got a full platform set up for them that they can build on, send that data over to the cloud; the cloud is not going away. I think a more hybrid approach is where the future lives, and that's what we're prepared for. >>Tim, really appreciate you coming to the program. Great stuff. Good to see you. >>Thanks very much. Appreciate it. >>Okay, in a moment I'll be back to wrap up today's session. You're watching theCUBE. >>Are you looking for some help getting started with InfluxDB, Telegraf, or Flux? Check out InfluxDB University, where you can find our entire catalog of free training that will help you make the most of your time series data. Get started for free at influxdbu.com. We'll see you in class. 
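(Editor's note: for readers who want to try the "single API across the platform" idea from the segments above, here is a minimal write-and-query sketch using the influxdb-client Python library against InfluxDB 2.x or InfluxDB Cloud. The URL, token, org, and bucket values are placeholders you would replace with your own; Telegraf or the raw HTTP line-protocol API are equally valid ways to get data in.)

```python
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholder connection details -- substitute your own instance or Cloud account.
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")

# Write one time-stamped point: a measurement, a tag, and a field.
write_api = client.write_api(write_options=SYNCHRONOUS)
write_api.write(
    bucket="my-bucket",
    record=Point("room_temperature").tag("room", "kitchen").field("temp_c", 21.5),
)

# Query the last hour back out with Flux.
flux = '''
from(bucket: "my-bucket")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "room_temperature")
'''
for table in client.query_api().query(flux):
    for record in table.records:
        print(record.get_time(), record.get_field(), record.get_value())
```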
>>Okay, so we heard today from three experts on time series and data how the InfluxDB platform is evolving to support new ways of analyzing large data sets very efficiently and effectively in real time. And we learned that key open source components like Apache Arrow, the Rust programming language, DataFusion, and Parquet are being leveraged to support real-time data analytics at scale. We also learned about the contributions and importance of open source software, and how the InfluxDB community is evolving the platform with minimal disruption to support new workloads, new use cases, and the future of real-time data analytics. Now remember, these sessions are all available on demand. You can go to thecube.net to find those. Don't forget to check out siliconangle.com for all the news related to things enterprise and emerging tech. And you should also check out influxdata.com. There you can learn about the company's products, you'll find developer resources like free courses, and you can join the developer community and work with your peers to learn and solve problems. And there are plenty of other resources around use cases and customer stories on the website. This is Dave Vellante. Thank you for watching Evolving InfluxDB into the Smart Data Platform, made possible by InfluxData and brought to you by theCUBE, your leader in enterprise and emerging tech coverage.
Brian Gracely & Idit Levine, Solo.io | KubeCon CloudNativeCon NA 2022
(bright upbeat music) >> Welcome back to Detroit guys and girls. Lisa Martin here with John Furrier. We've been on the floor at KubeCon + CloudNativeCon North America for about two days now. We've been breaking news, we would have a great conversations, John. We love talking with CUBE alumni whose companies are just taking off. And we get to do that next again. >> Well, this next segment's awesome. We have former CUBE host, Brian Gracely, here who's an executive in this company. And then the entrepreneur who we're going to talk with. She was on theCUBE when it just started now they're extremely successful. It's going to be a great conversation. >> It is, Idit Levine is here, the founder and CEO of solo.io. And as John mentioned, Brian Gracely. You know Brian. He's the VP of Product Marketing and Product Strategy now at solo.io. Guys, welcome to theCUBE, great to have you here. >> Thanks for having us. >> Idit: Thank so much for having us. >> Talk about what's going on. This is a rocket ship that you're riding. I was looking at your webpage, you have some amazing customers. T-Mobile, BMW, Amex, for a marketing guy it must be like, this is just- >> Brian: Yeah, you can't beat it. >> Kid in a candy store. >> Brian: Can't beat it. >> You can't beat it. >> For giant companies like that, giant brands, global, to trust a company of our size it's trust, it's great engineering, it's trust, it's fantastic. >> Idit, talk about the fast trajectory of this company and how you've been able to garner trust with such mass organizations in such a short time period. >> Yes, I think that mainly is just being the best. Honestly, that's the best approach I can say. The team that we build, honestly, and this is a great example of one of them, right? And we're basically getting the best people in the industry. So that's helpful a lot. We are very, very active on the open source community. So basically it building it, anyway, and by doing this they see us everywhere. They see our success. You're starting with a few customers, they're extremely successful and then you're just creating this amazing partnership with them. So we have a very, very unique way we're working with them. >> So hard work, good code. >> Yes. >> Smart people, experience. >> That's all you need. >> It's simple, why doesn't everyone do it? >> It's really easy. (all laughing) >> All good, congratulations. It's been fun to watch you guys grow. Brian, great to see you kicking butt in this great company. I got to ask about the landscape because I love the ServiceMeshCon you guys had on a co-located event on day zero here as part of that program, pretty packed house. >> Brian: Yep. >> A lot of great feedback. This whole ServiceMesh and where it fits in. You got Kubernetes. What's the update? Because everything's kind of coming together- >> Brian: Right. >> It's like jello in the refrigerator it kind of comes together at the same time. Where are we? >> I think the easiest way to think about it is, and it kind of mirrors this event perfectly. So the last four or five years, all about Kubernetes, built Kubernetes. So every one of our customers are the ones who have said, look, for the last two or three years, we've been building Kubernetes, we've had a certain amount of success with it, they're building applications faster, they're deploying and then that success leads to new challenges, right? So we sort of call that first Kubernetes part sort of CloudNative 1.0, this and this show is really CloudNative 2.0. What happens after Kubernetes service mesh? 
Is that what happens after Kubernetes? And for us, Istio now being part of the CNCF, huge, standardized, people are excited about it. And then we think we are the best at doing Istio from a service mesh perspective. So it's kind of the perfect equation. >> Well, I'll turn it on, listen to your great Cloudcast podcast, plug there for you. You always say what is it and what isn't it? >> Brian: Yeah. >> What is your product and what isn't it? >> Yeah, so our product, from a purely product perspective, is service mesh and API gateway. We integrate them in a way that nobody else does. So we make it easier to deploy, easier to manage, easier to secure. I mean, those two things ultimately are, whether it's an internal API or an external API, we secure it, we route it, we can observe it. So if you're building modern applications, you need this stuff in order to be able to go to market, deploy at scale, all those sorts of things. >> Idit, talk about some of your customer conversations. What are the big barriers that they've had, or the challenges, that solo.io comes in and just wipes off the table? >> Yeah, so I think that a lot of them, as Brian described it, very early they had success with Kubernetes, maybe a few clusters, but then they basically started to on-ramp more applications on those clusters. They need more clusters, maybe they want multi-cluster, multi-cloud. And they mainly wanted to enable the team, right? This is why we're all here, right? What we want to do eventually is to take a piece of the infrastructure and delegate it to our customers, which is basically the application team. So I think that's where they started to see the problem, because it's one thing to take some open source project and deploy it a little bit, but the scale, it's all about the scale. How do you enable all those millions of developers basically working on your platform? How do you scale multi-cloud? What's going on if one of them is down, how do you fail over? So that's exactly the problem that they have. >> Lisa: Which is critical for- >> As bad as COVID was as a global thing, it was an amazing enabler for us because so many companies had to say... If you're a retail company, your front door was closed, but you still wanted to do business. So you had to figure out, how do I do mobile? How do I be agile? If you were a company that was dealing with, like, used cars, your number of hits were through the roof because regular cars weren't available. So we have all these examples of companies who literally overnight, COVID was their digital transformation enabler. >> Lisa: Yes. Yes. >> And the scale that they had to deal with, the agility they had to deal with, and we sort of fit perfectly in that. They re-looked at, what's our infrastructure look like? What's our security look like? We just happened to be in the right place at the right time. >> And they had skillset issues- >> Skillsets. >> Yeah. >> And the remote work- >> Right, right. >> Combined with- >> Exactly. >> Modern upgrade, gun-to-the-head, almost, kind of mentality. >> And we're really an interesting company. Most of the interactions we do with customers are through Slack, obviously it was remote. We would probably be a great Slack case study in terms of how to do business, because our customers engage with us, with engineers all over the world; they look like one team. But we can get them up and running in a POC, in a demo, get them through their things really, really fast.
It's almost like going to the public cloud, but at whatever complexity they want. >> John: Nice workflow. >> So a lot of momentum for you guys silver linings during COVID, which is awesome we do hear a lot of those stories of positive things, the acceleration of digital transformation, and how much, as consumers, we've all benefited from that. Do you have one example, Brian, as the VP of product marketing, of a customer that you really think in the last two years just is solo.io's value proposition on a platter? >> I'll give you one that I think everybody can understand. So most people, at least in the United States, you've heard of Chick-fil-A, retail, everybody likes the chicken. 2,600 stores in the US, they all shut down and their business model, it's good food but great personal customer experience. That customer experience went away literally overnight. So they went from barely anybody using the mobile application, and hence APIs in the backend, half their business now goes through that to the point where, A, they shifted their business, they shifted their customer experience, and they physically rebuilt 2,600 stores. They have two drive-throughs now that instead of one, because now they have an entire one dedicated to that mobile experience. So something like that happening overnight, you could never do the ROI for it, but it's changed who they are. >> Lisa: Absolutely transformative. >> So, things like that, that's an example I think everybody can kind of relate to. Stuff like that happened. >> Yeah. >> And I think that's also what's special is, honestly, you're probably using a product every day. You just don't know that, right? When you're swiping your credit card or when you are ordering food, or when you using your phone, honestly the amount of customer they were having, the space, it's like so, every industry- >> John: How many customers do you have? >> I think close to 200 right now. >> Brian: Yeah. >> Yeah. >> How many employees, can you gimme some stats? Funding, employees? What's the latest statistics? >> We recently found a year ago $135 million for a billion dollar valuation. >> Nice. >> So we are a unicorn. I think when you took it we were around like 50 ish people. Right now we probably around 180, and we are growing, we probably be 200 really, really quick. And I think that what's really, really special as I said the interaction that we're doing with our customers, we're basically extending their team. So for each customer is basically a Slack channel. And then there is a lot of people, we are totally global. So we have people in APAC, in Australia, New Zealand, in Singapore we have in AMEA, in UK and in Spain and Paris, and other places, and of course all over US. >> So your use case on how to run a startup, scale up, during the pandemic, complete clean sheet of paper. >> Idit: We had to. >> And what happens, you got Slack channels as your customer service collaboration slash productivity. What else did you guys do differently that you could point to that's, I would call, a modern technique for an entrepreneurial scale? >> So I think that there's a few things that we are doing different. So first of all, in Solo, honestly, there is a few things that differentiated from, in my opinion, most of the companies here. Number one is look, you see this, this is a lot, a lot of new technology and one of the things that the customer is nervous the most is choosing the wrong one because we saw what happened, right? I don't know the orchestration world, right? 
>> John: So choosing and also integrating multiple things at the same time. >> Idit: Exactly. >> It's hard. >> And this is, I think, where Solo's expertise comes into place. So I mean we have one team that is dedicated to open source contribution and working with all the open source community, and I think we're really good at picking the right product, and basically we're usually right, which is great. So if you're looking at Kubernetes, we went there from the beginning. If you're looking at something like service mesh, Istio, we were all Envoy proxy and out of process. So I think that by choosing these things, and now Cilium is something that we're also focusing on. I think that by using the right technology, first of all you know that it's very expensive to migrate from one to the other if you get it wrong. So I think that's one thing we're always really good at. But then once we actually get into those projects, we're basically very good at going and leading those communities. So we are basically bringing the customers to the community itself. So we are leading this by being TOC members, right? The Technical Oversight Committee. And we are leading by actually contributing a lot. So if the customer needs something immediately, we will patch it for them and work upstream. So that's kind of like the second thing. And the third one is innovation. And that's really important to us. So we're pushing the boundaries. Ambient, that we announced a month ago with Google- >> And Istio, the book that's out. >> Yes, Ambient, it's basically a modern Istio which is the future of Istio. We worked on it with Google under NDA and it was released last month. This is exactly an example of us basically saying we can do it better. We learn from our customers, which is huge. And now we know that we can do better. So this is the third thing, and the last one is the partnership. I mean honestly we are the extension team of the customer. We are there on Slack if they need something. Honestly, there is a reason why our renewal rate is 98.9 and our net expansion is 135%. I mean customers are very, very happy. >> You deploy it, you make it right. >> Idit: Exactly, exactly. >> The other thing we did, and again this was during COVID, we didn't want to be a shelfware company. We didn't want to drop stuff off and you didn't know what to do with it. We trained nearly 10,000 people. We have something called Solo Academy, which is free, online workshops, they run all the time, people can come and get hands-on training. So we're building an army of people that are those specialists that have that skill set. So we don't have to walk into shops and go like, well okay, I hope six months from now you guys can figure this stuff out. They're like, they've been doing that. >> And then their friend sees their friend, sees their friend. >> The other thing, and I got to figure out as a marketing person how to do this, we have more than a few handfuls of people that have got promoted, they got promoted, they got promoted. We keep seeing people who deploy our technologies, who, because of this stuff they're doing- >> John: That's a good sign. They're doing it at scale, >> John: That promoter score. >> They keep getting promoted. >> Yeah, that's amazing. >> That's a powerful sort of side benefit. >> Absolutely, that's a great thing to have for marketing. Last question before we run out of time. You and I, Idit, were talking before we went live, your sessions here are overflowing.
What's your overall sentiment of KubeCon 2022 and what feedback have you gotten from all the customers bursting at the seams to come talk to you guys? >> I think first of all, there was the pre-event which we had and it was a lot of fun. We talked to a lot of customers, most of them Fortune 500, globally successful companies. So I think that people definitely... I will say that much. We definitely have the market fit, people interested in this. Brian described very well what we see here, which is people trying to figure out CloudNative 2.0. So that's number one. The second thing is that there is a consolidation, which I like, I mean Istio becoming right now a CNCF project, I think it's a huge, huge thing for all the community. I mean, we're talking about all the big three clouds, we partner with them. I mean I think this is a big sign that we agree, which I think is extremely important in this community. >> Congratulations on all your success. >> Thank you so much. >> And where can customers go to get their hands on this, solo.io? >> Solo.io? Yeah, absolutely. >> Awesome guys, this has been great. Congratulations on the momentum. >> Thank you. >> The rocket ship that you're riding. We know you've got to get to the airport so we're going to let you go. But we appreciate your insights and your time so much, thank you. >> Thank you so much. >> Thanks guys, we appreciate it. >> A pleasure. >> Thanks. >> For our guests and John Furrier, this is Lisa Martin live in Detroit, had to think about that for a second, at KubeCon 2022 CloudNativeCon. We'll be right back with our final guests of the day and then the show wraps, so stick around. (gentle music)
Nick Barcet, Red Hat & Greg Forrest, Lockheed Martin | KubeCon + CloudNativeCon NA 2022
(lighthearted music) >> Hey all. Welcome back to theCube's coverage of Kubecon North America '22 CloudNativeCon. We're in Detroit. We've been here all day covering day one of the event from our perspective. Three days of coverage coming at you. Lisa Martin here with John Furrier. John, a lot of buzz today. A lot of talk about the maturation of Kubernetes with different services that vendors are offering. We talked a little bit about security earlier today. One of the things that is a hot topic is national security. >> Yeah, this is a huge segment we got coming up. It really takes that all that nerd talk about Kubernetes and puts it into action. We actually see demonstrable results. This is about advanced artificial intelligence for tactical decision making at the edge to support our military operations because a lot of the deaths are because of bad technology. And this has been talked about. We've been covering Silicon Angle, we wrote a story there now on this topic. This should be a really exciting segment so I'm really looking forward to it. >> Excellent, so am I. Please welcome back one of our alumni, Nick Barcet senior director, customer led open innovation at Red Hat. Great to have you back. Greg Forrest joins us as well from Lockheed Martin Director of AI Foundations. Guys, great to have you on the program. Nick, what's been your perception before we dig into the news and break that open of KubeCon 2022? >> So, KubeCon is always a wonderful event because we can see people working with us in the community developing new stuff, people that we see virtually all year. But it's the time at which we can really establish human contact and that's wonderful. And it's also the moments where we can make big topic move forward and the topics have been plenty at this KubeCon from MicroShift to KCP, to AI, to all domains have been covered. >> Greg, you're the director of AI foundations at Lockheed Martin. Obviously well known, contractors to the military lot of intellectual property, storied history. >> Greg: Sure. >> Talk about this announcement with Red Hat 'cause I think this is really indicative of what's happening at the edge. Data, compute, industrial equipment, and people, in this case lives are in danger or to preserve peace. This is a killer story in terms of understanding what this all means. What's your take on this relationship with Red Hat? What's the secret sauce? >> Yeah, it's really important for us. So part of our 21st century security strategy as a company is to partner with companies like Red Hat and Big Tech and bring the best of the commercial world into the Department of Defense for our soldiers on the ground. And that's exactly what we announced today or Tuesday in our partnership. And so the ability to take commercial products and utilize them in theater is really important for saving lives on the ground. And so we can go through exactly what we did as part of this demonstration, but we took MicroShift at the edge and we were able to run our AI payloads on that. That provided us with the ability to do things like AI based RF sensing, so radio frequency sensing. And we were also able to do computer vision based technologies at the edge. So we went out, we had a small UAV that went out and searched for a target on the ground. It found a target using its radio frequency capabilities, the RF capabilities. Then once we're able to hone in on that target, what Red Hat device edge and MicroShift enables us to do is actually then switch sensing modalities. 
And then we're able to look at this target via the camera and use computer vision-based technologies to actually more accurately locate the target and then track that target in real time. So that's one of the keys to be able to actually switch modalities in real time on one platform is really important for our joint all domain operations construct. The idea of how do you actually connect all of these assets in the environment, in the battle space. >> Talk about the challenge and how hard it is to do this. The back haul, you'll go back to the central server, bring data back, connecting things. What if there's insecurity around connectivity? I mean there's a lot of things going, can you just scope the magnitude of how hard it's to actually deploy something at a tactical edge? >> It is. There's a lot of data that comes from all of these sensors, whether they're RF sensors or EO or IR. We're working across multiple domains, right? And so we want to take that data back and train on that and then redeploy to the edge. And so with MicroShift, we're able to do that in a way that's robust, that's repeatable, and that's automated. And that really instills trust in us and our customers that when we deploy new software capabilities to the edge over the air, like we did in this demonstration that they're going to run right on the target hardware. And so that's a huge advantage to what we're doing here that when we push software to the edge in real time we know it's going to run. >> And in realtime is absolutely critical. We talk about it in so many different industries. Oh, it's customers expect realtime access whether it's your banking app or whatnot. But here we're talking about literally life and death situations on the battlefield. So that realtime data access is literally life and death. >> It's paramount to what we're doing. In this case, the aircraft started with one role which was to go find a radio frequency admitter and then switch roles to then go get cameras and eyes on that. So where is that coming from? Are there people on the ground? Are there dangerous people on the ground? And it gives the end user on the ground complete situational awareness of what is actually happening. And that is key for enhanced decision making. Enhanced decision making is critical to what we're doing. And so that's really where we're advancing this technology and where we can save lives. >> I read a report from General Mattis when he was in service that a lot of the deaths are due to not having enough information really at the edge. >> Greg: Friendly fire. >> Friendly fire, a lot of stuff that goes on there. So this is really, really important. Nick, you're sitting there saying this is great. My customer's talking about the product. This is your innovation, Red Hat device edge in action. This is real. This is industrial- >> So it's more than real. Actually this type of use case is what convinced us to transform a technology we had been working on which is a small form factor of Kubernetes to transform it into a product. Because sometimes, US engineers have a tendency to invent stuff that are great on paper, but it's a solution trying to find a problem. And we need customers to work with us to make sure that do solution do solve a real problem. And Lockheed was great. Worked with us upstream on that project. Helped us prove out that the concept was actually worth it and we waited until Lockheed had tested the concept in the air. 
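The over-the-air update Greg describes, pushing new AI payload software to a device already in the field, comes down at the Kubernetes level that MicroShift exposes to a rolling update of a Deployment. A hedged sketch with the official Kubernetes Python client follows; the deployment name, namespace, and registry path are hypothetical, and this is not Lockheed's actual pipeline.

```python
# Illustrative only: roll an edge inference workload to a newly published
# image. MicroShift speaks the standard Kubernetes API, so the ordinary
# AppsV1 patch and rollout mechanics apply.
from kubernetes import client, config

config.load_kube_config()

patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {
                        "name": "rf-sensing",  # hypothetical container name
                        "image": "registry.example.com/rf-sensing:1.4.2",
                    }
                ]
            }
        }
    }
}

client.AppsV1Api().patch_namespaced_deployment(
    name="rf-sensing", namespace="payloads", body=patch
)
```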
>> Okay, so Red Hat Device Edge and MicroShift, explain that, how that works real quick for the folks that don't know. >> So one of the things we learned is that Kubernetes is great but it's only part of the journey. In order to get those workloads on those aircraft or in order to get those workloads in a factory, you also need to consider the full life cycle of the device itself. And you don't handle a device that is inside of a UAV or inside of a factory the same way you handle a server. You have to deal with those devices in a way that is much more akin to a set-top box. So we had to modify how the OS was behaving to deal with devices, and we reduced what we had built in RHEL for Edge and combined it with MicroShift, and that's what became Red Hat Device Edge. >> We're in a low SWaP environment, space, weight and power, right? Or very limited. We're on a small UAS in this demonstration. So the ability to spool up and spool down containers and to save computing power and to do that on demand and orchestrate that with MicroShift is paramount to what we're doing. We wouldn't be able to do it without that capability. >> John: That's awesome. >> I want to get both of your opinions. Nick, we'll start with you and then Greg we'll go to you. In terms of MicroShift, what is its superpower? What differentiates it from other competing solutions in the market? >> So MicroShift is Kubernetes, but reduced to the strict minimum of a runtime version of Kubernetes, so that it takes a minimal footprint, so that we maximize the space available for the workload in those very constrained environments. On a board where you have eight or 16 gig of RAM, if you use only two gig of that to run the infrastructure component, you leave the rest for the AI workload that you need on the drone. And that's what is really important. >> And these AI payloads, the inference that we're doing at the edge is very compute intensive. So again, the ability to manage that and orchestrate that is paramount to running on these very small board computers. These are small drones that don't have a lot of weight, that don't allow a lot of space. >> John: Got to be efficient. >> And be efficient with it. >> How were you guys involved? Talk about the relationship. So you guys were tightly involved. Talk about the roles you guys played together. Was it co-development? Was it customer/partner? Talk about the relationship. >> Yeah, so we started actually with satellite. So you can think of small CubeSats in a very similar environment to a low powered UAV. And it started there. And then in the last, I would say, year or so, Nick, we have worked together to develop MicroShift. We work closely on Slack channels together like we're part of the same team. >> John: That's great. >> And hey Red Hat, this is what we need, this is what we're looking for. These are the constraints that we have. And this team has been amazing and just delivered on everything that we've asked for. >> I mean this is really an example of the innovation at the edge, industrial edge specifically. You got an operating system, you got form factor challenges, you got operating parameters. And just having that flex, you can't just take this and put it over there. >> But what it really is, is a community applied to an industrial context.
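Nick's footprint argument, two gig for the platform and the rest for the model, is expressed in practice through explicit resource requests and limits on the workload. The manifest below is a hypothetical sketch, not an actual Lockheed configuration; the numbers simply illustrate leaving most of an 8 GiB board to the inference container. It could be applied with the same Kubernetes Python client shown earlier.

```python
# Hypothetical edge inference Deployment for a memory-constrained board.
# Only the resources section is the point here: the platform keeps a small
# slice and the AI payload gets the bulk of CPU and memory.
vision_inference = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "vision-inference", "namespace": "payloads"},
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"app": "vision-inference"}},
        "template": {
            "metadata": {"labels": {"app": "vision-inference"}},
            "spec": {
                "containers": [
                    {
                        "name": "vision-inference",
                        "image": "registry.example.com/vision-inference:1.0.0",
                        "resources": {
                            "requests": {"cpu": "2", "memory": "4Gi"},
                            "limits": {"cpu": "3", "memory": "6Gi"},
                        },
                    }
                ]
            },
        },
    },
}
```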
So what happened there is we worked as part of the MicroShift community together with a real time communication channel, the same Slack that anybody developing Kubernetes uses, which we've been using to identify where the problems were, how to solve them, bring new ideas, and that's how we tackle these problems. >> Yeah, a true open source model. I mean the Red Hat and the Lockheed teams were in it together on a daily basis, communicating like we were part of the same company. And that's really how you move these things forward. >> Yeah, and of course open source is great, but also you got to lock down the security. How did you guys handle that? What's going on with the security? 'Cause you got to make sure nobody takes over the devices. >> So the funny thing is that even though what we produce is highly inclusive of security concerns, our development model is completely open. So it's not security by obfuscation, it's security because we apply the best practices. >> John: You see everything. >> Absolutely. >> Yes. >> And then you harden it in the joint development, there it is. >> Yeah, but what we support, what we offer as a product is the same for Lockheed or for any other customer, because there is no domain where security is not important. When you control the recognition on a drone or you control the behavior of a robot in a factory, security is paramount, because you could immobilize a country by infecting a robot the same way you could immobilize a military operation- >> Greg: That's right. >> By infecting a UAV. >> Not to change the subject, but I got to go on a tangent here 'cause it pops in my head. You mentioned CubeSat, not related to theCUBE of course. We're theCUBE, for the video. CubeSats are very powerful. People can launch into space right now very inexpensively. So it's a highly contested and congested environment. Any space activity going on around the corner with you guys? 'Cause remember, the world's not round, its edge is now in space. Mars is the edge. >> That's right. >> Our first prototype for MicroShift was actually a CubeSat. >> Greg: That's where it started. >> An IBM project, the project called Endurance. That's the first time we actually put MicroShift into use. And that was a very interesting project, a very early version of MicroShift. And now we have talks with many other people on reproducing that at a more industrial level; this was more like a cool high school project. >> But to your point, the scalability across different platforms is there. If we're running on top of MicroShift on this common OS, it just eases the development. Behind the scenes, we have a whole AI factory at Lockheed Martin where we have a common ecosystem for how we actually develop and deploy these algorithms to the edge. And now we've got a common ecosystem at the edge. And so it helps that whole process to be able to do that in automated ways, repeatable ways, so we can instill trust in our DoD customer; the validation and verification of this is a really important aspect. >> John: Must be a fun place to work. >> It is, it's exciting. There's endless opportunities. >> You must get a lot of young kids applying for those jobs. They're barely into the whole field. I mean, AI's a hot field and people want to get their hands on real applications. I was serious about space. Is there space activity going on with you guys or is it just now military edge, not yet military space? Or is that classified? >> Yeah, so we're working across multiple fronts, absolutely. >> That's awesome.
>> What excite, oh, sorry John. What excites you most, never a dull moment with what you're doing, but just the potential to enable a safer, a more secure world, what excites you most about this partnership and the direction and the we'll say the trajectory it's going on? >> Yeah, I think, for me, the safer insecure world is paramount to what we're doing. We're here for national defense and for our allies and that's really critical to what we're doing. That's what motivates me. That's what gets me up in the morning to know that there is a soldier on the ground who will be using this technology and we will give be giving that person the situational awareness to make the right decisions at the right time. So we can go from small UAVs to larger aircraft or we can do it in a small confined edge device like a stalker UAV. We can scale this up to different products different platforms and they don't even have to be Lockheed Martin >> John: And more devices that are going to be imagined. >> More devices that we haven't even imagined yet. >> Right, that aren't even on the frontier yet. Nick, what's next from your perspective? >> In the domain we are in, next is always plenty of things. Sustainability is a huge domain right now on which we're working. We have lots of things going on in the AI space, stuff going on with Lockheed Martin. We have things going on in the radio network domain. We've been very heavily involved in telecommunication and this is constantly evolving. There is not one domain that, in terms of infrastructure Red Hat is not touching >> Well, this is the first of multiple demonstrations. The scenarios will get more complex with multiple aircraft and in the future, we're also looking at bringing a lot of the 5G work. Lockheed has put a large focus on 5G.mil for military applications and running some of those workloads on top of MicroShift as well is things to come in the future that we are already planning and looking at. >> Yeah, and it's needed in theater to have connectivity. Got to have your own connectivity. >> It's paramount, absolutely. >> Absolutely, it's paramount. It's game-changing. Guys, thank you so much for joining John and me on theCube talking about how Red Hat and Lockheed Martin are working together to leverage AI to really improve decision making and save more lives. It was a wonderful conversation. We're going to have to have you back 'cause we got to follow this. >> Yeah, of course. >> This was great, thank you so much. >> Thank you very much for having us. >> Lisa: Our pleasure, thank you. >> Greg: Really appreciate it. >> Excellent. For our guests and John Furrier, I'm Lisa Martin. You're watching theCUBE Live from KubeCon CloudNativeCon '22 from Detroit. Stick around. Next guest is going to join John and Savannah in just a minute. (lighthearted music)
Deepthi Sigireddi, PlanetScale | KubeCon + CloudNativeCon NA 2022
(upbeat intro music) >> Good afternoon, fellow tech nerds. My name is Savannah Peterson, coming to you from theCube's Remote Studio here in Motown, Detroit, Michigan, where we are at KubeCon. John, this is our 12th interview of the day. How are you feeling? >> I'm feeling fresh as the first interview. (Savannah laughs) As always. >> That delivery really implied a level of freshness. >> Let's go! No, this is only Day 1. In three days, re:Invent. We go hardcore. These are great events. We get so much great content. The conversations are amazing. The guests are awesome. They're technical, they're smart, and they're making the difference in the future. So, this next segment about scaling MySQL should be awesome. >> I am very excited to introduce our next guest who actually has a Twitter handle that I think most people, at least of my gender in this industry, would love to have. She is @ATechGirl. So you can go ahead and tweet her and tell her how great this interview is while we're live. Please welcome Deepthi Sigireddi. Thank you so much for being here with us. >> Thank you for having me. >> You're fitting us in. You've got two talks you're giving while we're here. >> Yes, yes. So tomorrow we will be talking about VTR, myself and one of the other maintainers of Vitess, and on Friday we have the Vitess Maintainer Talk. All graduated projects get a maintainer talk. >> Wow, so you are like KubeCon VIP celebrity. >> Well, I hope so. >> Well, you're a maintainer and technical lead, also software engineer at PlanetScale. But talk about the graduation process, what that means to the project and the people involved. >> So Vitess graduated in 2019, and there are strict criteria for graduation, and you don't just have to meet the minimum, you sort of have to overperform on the graduation criteria. Some of which are like there must be at least two large production deploys, and people from those companies have to go in front of the CNCF committee that approves these things and say that, "Yes, this project is critical to our business." >> A lot of peer review, a lot of deployment success. >> Yes. >> Good consistency in the code. >> Deepthi: Community diversity. >> All that. >> All those things. >> Talk about the importance of this project. What is the top story that people should know about around the project? Why it exists, why it's important, why it's relevant, why it's cool. How would you answer that? >> So MySQL is now 30 years old and yet they are still- >> Makes me feel a little old, sidebar. (Deepthi laughs) Yeah. >> And yet even though there are many other newer databases, it continues to be used at many of the largest internet scale companies. And some of them, for example, Slack, GitHub, Square, they have grown to a level where they could not have if they had tried to do it with the vanilla MySQL that they started with, and the only reason they are where they are is Vitess. So that is, I think, the number one thing people should know about Vitess. >> And the origination story, our notes say, "Came from YouTube." >> Yes. So the way Vitess started was that YouTube was having problems with their MySQL deployment and they got tired of dealing with the site being down. So the founders of Vitess decided that they had to do something about it and they started building Vitess, which started as a pretty small code base with limited features, and over time they built sharding and all of the other things that we have today. >> Well, this is exciting Savannah because we've seen this industry.
Like with Facebook, when they started, everyone built their own stuff. MySQL was a great- >> Oh gosh, and everyone wanted to build it their way, reinventing the wheel. >> And MySQL was great. And then as it kind of broke when it grew, it got retrofitted. So, it was constantly being scaled up to the point where now you guys, if I get this right, said, "Hey, we're going to work on this. We're going to make it next-gen." So it's kind of like next-gen MySQL. Almost. >> Yes, yes. I would say that's pretty accurate, yeah. So there are still large companies which run their own MySQL and they have scaled it in their own way, but Vitess happens to be an open source way of scaling MySQL that people can adopt without having to build all of their own tooling around it. >> Speaking of that and growing, you just announced a new version today. >> Yes, yes. >> Tell us about that. >> The focus in this version was to make Vitess easier to use and to deploy. So in the past, there was one glaring gap in Vitess which was that Vitess did not automatically detect and repair MySQL level failures. With this release, we've actually closed that gap. And what that means for people using Vitess is that they will actually spend less time dealing with outages manually, or less human intervention, More automated recovery is what it means. The other thing we've released today is a new web UI. Vitess had a very old web UI, ugly, hard to maintain. Nobody liked it. But it was functional, except we couldn't add anything new to it because it was so old. So, the backend functionality kept advancing but the front end was kind of frozen. Now we have a next generation UI to which in upcoming releases we can add more and more functionality. >> So, it's extensible. They add things in. >> Deepthi: Oh yes, of course. Yeah. >> Awesome. What's the biggest thing that you like about the new situation? Is it more contributors are on board the UI? What's the fresh new impact that's happening in the community? What's getting you excited about with the current project? And the UI's great 'cause usability is important. >> Deepthi: Right. >> Scalability is important. >> I think Vitess solved the scalability problem way early and only now we are really grappling with the usability problem. So the hope and the desire is to make Vitess autopilot so that you reduce human intervention to a minimum once you deploy it. Obviously, you have to go through the process of deploying it. But once you've deployed it, it should just run itself. >> Runs at scale. So, the scale's huge? >> Deepthi: Yes. >> How many contributors are involved in the project? Can you give some numbers? Do you have any handy that you can speak to? >> Right. So, CNCF actually tracks these statistics for all the projects and we consolidated some numbers for the last two full calendar years, 2020 and 2021. We had over 400 contributors and 200 plus of them contributed code and the others contributed documentation issues, website changes, and things like that. So that gives- >> How about downloads? Download's good? >> Oh, okay. So we started publishing the current official Vitess Docker Image in 2018. And by October of 2020, we had about 3.8 million downloads. And by August of 2021, we had 5.2 million. And today, we have had over 10 million downloads- >> Wow! >> Of the main image. >> Starting to see a minute of that hockey stick that we all like to see. Seems like you're very clearly a community-first leader and it seems like that's in the PlanetScale and the test's DNA. 
Is that how the whole company culture views it? Would you say it's community-first business? >> PlanetScale is very much committed to Vitess as an open source project and to serving the Vitess community. So as part of my role at PlanetScale, some of the things I do are helping new contributors whether they are from PlanetScale or from outside PlanetScale. A number of PlanetScale engineers who don't work full-time on Vitess still contribute bug fixes and features to Vitess. We spend a significant amount of our energy helping users in our community Slack. The releases we do are mainly for the benefit of the community and PlanetScale is making those releases because for Planet Scale... Within PlanetScale, we actually do separate releases versus the public ones. >> One of the things that's coming up here at the show is deploying on Kubernetes. How does that look like? Everyone wants ease of use. Are you guys easy to use? >> Yes, yes. So PlanetScale also open sourced a Kubernetes operator for Vitess that people outside PlanetScale are using to run their production deployments of Vitess. Prior to that, there were Vitess users who actually built their own Kubernetes deployments of Vitess and they are still running those, but new users and new adopters of Vitess tend to use the Kubernetes operator that we are publishing. >> And you guys are the managed service for Vitess for the people that that's the business model for PlanetScale. >> Correct. So PlanetScale has a serverless database on demand which is built on Vitess. So if someone's starting something new and they just need a database, you sign up. It takes 30 seconds to get a database. Connect to it and start doing things with it. Versus if you are a large enterprise and you have a huge database deployment, you can migrate to PlanetScale, import all of your existing data, cut over with minimal downtime and then go, and then PlanetScale manages that. >> And why would they do that? What's the use case for that? Save time new development team or refactoring? >> Save time not being able to hire people with the skills to run it in-house. Not wanting to invest engineering resources in what businesses think is not their core competency. They want to focus on their business value. >> So, this database is a service in their whatever they're doing without adding more costs. >> Right. >> And speed. Okay, cool. How's that going? >> It's going well. >> Any feedback from customers in terms of why that there are any benefit statements you seek popping out? What are the big... What's the big aha when they... When people realize what they have here, what's the aha moment for them? Do they go, "Wow, this is awesome. It's so easy. Push a button. Migrate." Or is it... >> All of those. And people have actually seen cost savings when they've migrated from Amazon RDS to PlanetScale and we have testimonials from people who've said that, "It was so easy to use PlanetScale. Why would we try to do it ourselves?" >> It's the best thing a customer could say, right? We're all about being painkillers and solving some sort of problem. I think that that's a great opportunity to let you show off some of your customers. So, who is receiving this benefit? 'Cause I know PlanetScale specifically is for a certain style of business. >> Hmm. We have a list of customers on the website. >> Savannah: I was going to say you have a really- >> John: She's a software engineer. She's not marketing. >> You did sexy. >> You're doing a great job as much as marketing. 
>> So the reason I am bringing this up is because it's clear this is a solution for companies like Square, SoundCloud, Etsy, Jordan, and other exciting brands. So when you're talking about companies at scale, these companies are very much at scale, which is awesome. >> Yeah. >> What's next? What do you guys see the future for the project? >> I think we talked about that a little bit already. So, usability is a big thing. We did the new UI. It's not complete, right? Because over the last four years we've built more features into the backend which you can't yet access from the UI. So we want to be able for people to use things like online schema changes which is a big feature of Vitess. Doing schema changes without downtime from the UI. So, schema management from the UI. Vitess has something called VReplication which is the core technology that enables charting. And right now you can from the UI monitor your charting status, but you can't actually start charting from the UI. So more of the administrative functions we want to enable from the UI. >> John: Awesome. >> Last question. What are you personally most excited about this week being here with our wonderful community? >> I always enjoy being at KubeCon. This is my fifth or sixth in-person and I've done a couple of virtual ones. >> Savannah: Awesome. >> Because of the energy, because you get to meet people in person whom previously you've only met in Slack or maybe in a monthly community Zoom calls. We always have people come to our project booth. We have a project booth here for Vitess. People come to the company booth. PlanetScale has a booth. People come to our talks, ask questions. We end up having design discussions, architecture discussions. We get feedback on what is important to the people who show up here. That always informs what we do with the project in future releases. >> Perfect answer. I already mentioned that you can get a hold and in touch with Deepthi through her wonderful Twitter handle. Is there any other website or anything you want to shout out here before I do our close? >> vitess.io. V-I-T-E-S-S dot I-O is the Vitess website and planetscale.com is the PlanetScale website. >> Deepthi Sigireddi, thank you so much for being on the show with us today. John, thanks for keeping me company as always. >> You're welcome. >> And thank all of you for tuning into theCUBE. We will be here in Detroit, Michigan all week live from KubeCon and we hope to see you there. (gentle upbeat music)
Michael Foster & Doron Caspin, Red Hat | KubeCon + CloudNativeCon NA 2022
(upbeat music) >> Hey guys, welcome back to the show floor of KubeCon + CloudNativeCon '22 North America from Detroit, Michigan. Lisa Martin here with John Furrier. This is day one, John at theCUBE's coverage. >> CUBE's coverage. >> theCUBE's coverage of KubeCon. Try saying that five times fast. Day one, we have three wall-to-wall days. We've been talking about Kubernetes, containers, adoption, cloud adoption, app modernization all morning. We can't talk about those things without addressing security. >> Yeah, this segment we're going to hear container and Kubernetes security for modern application 'cause the enterprise are moving there. And this segment with Red Hat's going to be important because they are the leader in the enterprise when it comes to open source in Linux. So this is going to be a very fun segment. >> Very fun segment. Two guests from Red Hat join us. Please welcome Doron Caspin, Senior Principal Product Manager at Red Hat. Michael Foster joins us as well, Principal Product Marketing Manager and StackRox Community Lead at Red Hat. Guys, great to have you on the program. >> Thanks for having us. >> Thank you for having us. >> It's awesome. So Michael StackRox acquisition's been about a year. You got some news? >> Yeah, 18 months. >> Unpack that for us. >> It's been 18 months, yeah. So StackRox in 2017, originally we shifted to be the Kubernetes-native security platform. That was our goal, that was our vision. Red Hat obviously saw a lot of powerful, let's say, mission statement in that, and they bought us in 2021. Pre-acquisition we were looking to create a cloud service. Originally we ran on Kubernetes platforms, we had an operator and things like that. Now we are looking to basically bring customers in into our service preview for ACS as a cloud service. That's very exciting. Security conversation is top notch right now. It's an all time high. You can't go with anywhere without talking about security. And specifically in the code, we were talking before we came on camera, the software supply chain is real. It's not just about verification. Where do you guys see the challenges right now? Containers having, even scanning them is not good enough. First of all, you got to scan them and that may not be good enough. Where's the security challenges and where's the opportunity? >> I think a little bit of it is a new way of thinking. The speed of security is actually does make you secure. We want to keep our images up and fresh and updated and we also want to make sure that we're keeping the open source and the different images that we're bringing in secure. Doron, I know you have some things to say about that too. He's been working tirelessly on the cloud service. >> Yeah, I think that one thing, you need to trust your sources. Even if in the open source world, you don't want to copy paste libraries from the web. And most of our customers using third party vendors and getting images from different location, we need to trust our sources and we have a really good, even if you have really good scanning solution, you not always can trust it. You need to have a good solution for that. >> And you guys are having news, you're announcing the Red Hat Advanced Cluster Security Cloud Service. >> Yes. >> What is that? >> So we took StackRox and we took the opportunity to make it as a cloud services so customer can consume the product as a cloud services as a start offering and customer can buy it through for Amazon Marketplace and in the future Azure Marketplace. 
So customers can use it for AKS and EKS and also, of course, OpenShift. So we are not specifically for OpenShift. We're not just OpenShift. We also provide support for EKS and AKS. So we provide the capability to secure the whole cloud posture. We know customers are not only OpenShift or not only EKS. We have both, whether it's three clouds or full cloud. So we are open. >> So it's not just OpenShift, it's Kubernetes environments, all together. >> Doron: All together, yeah. >> Lisa: Meeting customers where they are. >> Yeah, exactly. And we focus on, we are not trying to boil the ocean or solve the whole cloud security posture. We try to solve Kubernetes cluster security. It's very unique and needs a unique solution. It's not just added value in our cloud security solution. We think it's something special for Kubernetes and this is what Red Hat is aiming to do: to solve this issue. >> And the ACS platform really doesn't change at all. It's just how they're consuming it. It's a lot quicker in the cloud. Time to value is right there. As soon as you start up a Kubernetes cluster, you can get started with ACS cloud service and get going really quickly. >> I'm going to ask you guys a very simple question, but I heard it in the bar in the lobby last night. Practitioners talking, and they were excited about the Red Hat opportunity. They actually asked a question, where do I go and get some free Red Hat to test some Kubernetes out and run Helm or whatever. They want to play around. And do you guys have a program for someone to get started for free? >> Yeah, so the cloud service specifically, we're going into service preview. So if people sign up, they'll be able to test it out and give us feedback. That's what we're looking for. >> John: Is that a Sandbox or is that going to be in the cloud? >> They can run it in their own environment. So they can sign up. >> John: Free. >> Doron: Yeah, free. >> For the service preview. All we're asking for is for customer feedback. And I know it's actually getting busy there. It's starting in December. So the quicker people are, the better. >> So my friend in the lobby I was talking to, I told you it was free. I gave you the sandbox, but check out your cloud too. >> And we also have the open source version so you can download it and use it. >> Yeah, people want to know how to get involved. I'm getting a lot more folks coming to Red Hat from the open source side that want to get their feet wet. There's been a lot of people really interested. That's a real testament to the product leadership. Congratulations. >> Yeah, thank you. >> So what are the key challenges that you have on your roadmap right now? You got the products out there, what's the current state? Can you scope the adoption? Can you share where we're at? What people are doing specifically and the real challenges? >> I think one of the biggest challenges is talking with customers with a slightly, I don't want to say outdated, but an older approach to security. You hear things like malware pop up and it's like, well, really what we should be doing is keeping things to low and medium vulnerabilities, looking at the configuration, managing risk accordingly. Having disparate security tools or different teams doing various things, it's really hard to get a security picture of what's going on in the cluster. That's some of the biggest challenges that we talk with customers about. >> And in terms of resolving those challenges, you mentioned malware, we talk about ransomware.
It's a household word these days. It's no longer, are we going to get hit? It's when? It's what's the severity? It's how often? How are you guys helping customers to dial down some of the risk that's inherent and only growing these days? >> Yeah, risk, it's a tough word to generalize, but our whole goal is to give you as much security information in a way that's consumable so that you can evaluate your risk, set policies, and then enforce them early on in the cluster or early on in the development pipeline so that your developers get the security information they need, hopefully asynchronously. That's the best way to do it. It's nice and quick, but yeah. I don't know if Doron you want to add to that? >> Yeah, so I think, yeah, we know that ransomware, again, it's a big world for everyone and we understand the area of the boundaries where we want to, what we want to protect. And we think it's about policies and where we enforce it. So, and if you can enforce it on, we know that as we discussed before that you can scan the image, but we never know what is in it until you really run it. So one of the thing that we we provide is runtime scanning. So you can scan and you can have policy in runtime. So enforce things in runtime. But even if one image got in a way and get to your cluster and run on somewhere, we can stop it in runtime. >> Yeah. And even with the runtime enforcement, the biggest thing we have to educate customers on is that's the last-ditch effort. We want to get these security controls as early as possible. That's where the value's going to be. So we don't want to be blocking things from getting to staging six weeks after developers have been working on a project. >> I want to get you guys thoughts on developer productivity. Had Docker CEO on earlier and since then I had a couple people messaging me. Love the vision of Docker, but Docker Hub has some legacy and it might not, has does something kind of adoption that some people think it does. Are people moving 'cause there times they want to have these their own places? No one place or maybe there is, or how do you guys see the movement of say Docker Hub to just using containers? I don't need to be Docker Hub. What's the vis-a-vis competition? >> I mean working with open source with Red Hat, you have to meet the developers where they are. If your tool isn't cutting it for developers, they're going to find a new tool and really they're the engine, the growth engine of a lot of these technologies. So again, if Docker, I don't want to speak about Docker or what they're doing specifically, but I know that they pretty much kicked off the container revolution and got this whole thing started. >> A lot of people are using your environment too. We're hearing a lot of uptake on the Red Hat side too. So, this is open source help, it all sorts stuff out in the end, like you said, but you guys are getting a lot of traction there. Can you share what's happening there? >> I think one of the biggest things from a developer experience that I've seen is the universal base image that people are using. I can speak from a security standpoint, it's awesome that you have a base image where you can make one change or one issue and it can impact a lot of different applications. That's one of the big benefits that I see in adoption. >> What are some of the business, I'm curious what some of the business outcomes are. You talked about faster time to value obviously being able to get security shifted left and from a control perspective. 
but what are some of the, if I'm a business, if I'm a telco or a healthcare organization or a financial organization, what are some of the top line benefits that this can bubble up to impact? >> I mean for me, with those two providers, compliance is a massive one. And just having an overall look at what's going on in your clusters, in your environments so that when audit time comes, you're prepared. You can get through that extremely quickly. And then as well, when something inevitably does happen, you can get a good image of all of like, let's say a Log4Shell happens, you know exactly what clusters are affected. The triage time is a lot quicker. Developers can get back to developing and then yeah, you can get through it. >> One thing that we see that customers compliance is huge. >> Yes. And we don't want to, the old way was that, okay, I will provision a cluster and I will do scans and find things, but I need to do for PCI DSS for example. Today the customer want to provision in advance a PCI DSS cluster. So you need to do the compliance before you provision the cluster and make all the configuration already baked for PCI DSS or HIPAA compliance or FedRAMP. And this is where we try to use our compliance, we have tools for compliance today on OpenShift and other clusters and other distribution, but you can do this in advance before you even provision the cluster. And we also have tools to enforce it after that, after your provision, but you have to do it again before and after to make it more feasible. >> Advanced cluster management and the compliance operator really help with that. That's why OpenShift Platform Plus as a bundle is so popular. Just being able to know that when a cluster gets provision, it's going to be in compliance with whatever the healthcare provider is using. And then you can automatically have ACS as well pop up so you know exactly what applications are running, you know it's in compliance. I mean that's the speed. >> You mentioned the word operator, I get triggering word now for me because operator role is changing significantly on this next wave coming because of the automation. They're operating, but they're also devs too. They're developing and composing. It's almost like a dashboard, Lego blocks. The operator's not just manually racking and stacking like the old days, I'm oversimplifying it, but the new operators running stuff, they got observability, they got coding, their servicing policy. There's a lot going on. There's a lot of knobs. Is it going to get simpler? How do you guys see the org structures changing to fill the gap on what should be a very simple, turn some knobs, operate at scale? >> Well, when StackRox originally got acquired, one of the first things we did was put ACS into an operator and it actually made the application life cycle so much easier. It was very easy in the console to go and say, Hey yeah, I want ACS my cluster, click it. It would get provisioned. New clusters would get provisioned automatically. So underneath it might get more complicated. But in terms of the application lifecycle, operators make things so much easier. >> And of course I saw, I was lucky enough with Lisa to see Project Wisdom in AnsibleFest. You going to say, Hey, Red Hat, spin up the clusters and just magically will be voice activated. Starting to see AI come in. So again, operations operator is got to dev vibe and an SRE vibe, but it's not that direct. Something's happening there. We're trying to put our finger on. What do you guys think is happening? 
>> You mentioned the word operator. That's a triggering word for me now, because the operator role is changing significantly with this next wave coming, because of the automation. They're operating, but they're also devs too. They're developing and composing. It's almost like a dashboard, Lego blocks. The operator is not just manually racking and stacking like the old days, and I'm oversimplifying it, but the new operators are running stuff: they've got observability, they've got coding, they're servicing policy. There's a lot going on, there are a lot of knobs. Is it going to get simpler? How do you see the org structures changing to fill the gap on what should be very simple: turn some knobs, operate at scale? >> Well, when StackRox originally got acquired, one of the first things we did was put ACS into an operator, and it actually made the application lifecycle so much easier. It was very easy in the console to go and say, hey, I want ACS on my cluster, click it, and it would get provisioned. New clusters would get provisioned automatically. So underneath it might get more complicated, but in terms of the application lifecycle, operators make things so much easier. >> And of course, I was lucky enough with Lisa to see Project Wisdom at AnsibleFest. You're going to say, hey, Red Hat, spin up the clusters, and it will just magically be voice activated. We're starting to see AI come in. So again, the operations operator has got a dev vibe and an SRE vibe, but it's not that direct. Something's happening there that we're trying to put our finger on. What do you guys think is happening? What's real? What's the action? What's transforming? >> That's a good question. I think in general, things just move to the developers all the time. I mean, we talk about shift-left security, everything's always going that way. Developers are handling everything. I'm not sure exactly. Doron, do you have any thoughts on that? >> Doron, what's your reaction? You can just, it's okay, say what you want. >> So I spoke with one of our customers yesterday, and they said that in the last few years they have developed tons of code just to operate their infrastructure. Five or six years ago, when a developer wanted a VM, it would take a week to get one, because they needed all the approvals and someone had to actually provision that VM on VMware. Today they have automated it all end to end, and it takes two minutes to get a VM for a developer. So operators are becoming developers, as you said. They develop code, they do infrastructure as code and infrastructure as operators, to make it easier for the business to run. >> And then also, if you add in DataOps, AIOps, Security Ops, that's the new IT. It seems the new IT is the stuff that's scaling: a lot of data coming in, and you've got security. So all of that has got to be brought in. How do you guys factor that into the equation? >> Oh, I mean, you become big generalists. I think there's a reason why those cloud security or cloud professional certificates are becoming so popular. You have to know a lot about all the different applications, be able to code it, automate it, like you said, hopefully everything as code. And then it also makes it easy for security tools to come in and examine where the vulnerabilities are, when those things are as code. So because you're going and developing all this automation, you do become, let's say, a generalist. >> We've been hearing on theCUBE here, and we've been hearing across the industry, about burnout among security professionals and some DataOps teams, because of the tsunami of data, the tsunami of breaches, a lot of engineers getting called in the middle of the night. So that's not automated. This has got to get solved quickly, scaled up quickly. >> Yes. There's a two-part question there. In terms of the burnout aspect, you had better send some love to your security team, because they only get called when things get broken, and when they're doing a great job you never hear about them. So I think that's one of the things, it's a thankless profession. For the second part, if you have the right tools in place, so that when something does hit the fan and does break you can make an automated or a specific decision upstream to change it, then things become easy. It's when the tools aren't in place and you have disparate environments that, when a Log4Shell or something like that comes in, you're scrambling trying to figure out what clusters are where and where you're impacted. >> Point of attack, remediate fast. That seems to be the new move. >> Yeah. And you do need to know exactly what's going on in your clusters and how to remediate it quickly, how to get the most impact with one change.
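Michael's point about knowing exactly which clusters are affected is, in practice, an inventory query. A minimal sketch, assuming you already export a workload-to-package inventory from whatever scanner or platform you run; the data shape here is made up for the example.

```python
# Invented inventory shape: cluster -> workload -> packages with versions.
inventory = {
    "prod-us-east": {
        "payments-api": {"log4j-core": "2.14.1", "openssl": "3.0.7"},
        "frontend": {"openssl": "3.0.7"},
    },
    "staging-eu": {
        "payments-api": {"log4j-core": "2.17.2"},
    },
}


def affected(inventory: dict, package: str, bad_versions: set[str]) -> list[tuple[str, str]]:
    """Return (cluster, workload) pairs running a vulnerable version of the package."""
    hits = []
    for cluster, workloads in inventory.items():
        for workload, packages in workloads.items():
            if packages.get(package) in bad_versions:
                hits.append((cluster, workload))
    return hits


# Example: a Log4Shell-style advisory listing vulnerable versions.
for cluster, workload in affected(inventory, "log4j-core", {"2.14.0", "2.14.1", "2.15.0"}):
    print(f"patch first: {cluster}/{workload}")
```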
>> And that makes sense. The surface area is expanding, more things are being pushed, so things will happen, whether it's a zero-day vulnerability or just an attack. >> It's a mix, yeah. Customers automate all of their things, but it's good and bad. >> One customer told us, I think it was Spotify, that they lost a whole zone because of one mistake, because they automate everything and you make one mistake. >> It scales the failure, really. >> Exactly. It scaled the failure really fast. >> That was actually a KubeCon talk, I think four years ago. They talked about it. It was a great learning experience. >> It's a double-edged sword there. >> Yeah. So definitely we need to, again, scale automation, and test the automation too. You need to run drills around it. >> Yeah, you have to know the impact. There's a lot of talk in the security space about what you can and can't automate. And by default, when you install ACS, everything is non-enforced. You have to have admission control in place. >> How are you guys seeing your customers? Obviously Red Hat's got a great customer base. How are they adapting to the managed service wave that's coming? People like managed services now, maybe because they have skills gap issues. So managed services are becoming a big part of the portfolio. What's your take on the managed services piece? >> It's just time to value. You're developing a new application, you need to get it out there quickly. If your competitor gets out there a month before you do, that's a huge market advantage. >> So why care how you got there? >> Exactly. And we've had so much Kubernetes expertise over the last 10-plus years, well, Kubernetes for seven-plus years at Red Hat, that why wouldn't you leverage that knowledge internally so you can get your application out? >> Why change your toolchain and your workflows? Go faster and take advantage of the managed service, because it's just about getting from point A to point B. >> Exactly. >> Well, time to value, as you mentioned, is not a trivial term, it's not a marketing term. There's a lot of impact that can be made. Organizations that can move faster, that can iterate faster, develop what their customers are looking for, so that they have that competitive advantage. It's definitely not something that's trivial. >> Yeah. And working in marketing, whenever you get that new feature out and I can go and chat about it online, it's always awesome. You always get customers' interest. >> Pushing new code, being secure. What's next for you guys? What's on the agenda? What's around the corner? We'll see a lot of Red Hat at re:Invent. Obviously your relationship with AWS is strong as a company. Multi-cloud is here. Supercloud, as we've been saying. Supercloud is a thing. What's next for you guys? >> So we launched the cloud service with the idea that we will get feedback from customers. We are not going GA yet, we're not going to sell it for now. We want to get customers and we want to get feedback, to make the product the best we can sell and the best we can give our customers. And when we go GA and we start selling this product, we will have the best product in the market. So this is our goal: we want to get the customer in the loop and get as much feedback as we can. And we are also working very closely with our existing customers on the product, to add more and more of the features the customer needs. It's all about the supply chain. I don't like the phrase, but we have to say it: it's all about making things more automated and easier for our customers to use, so they have security in their Kubernetes environment.
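One small example of the "more automated" supply chain angle Doron closes on is refusing to deploy images that fail signature verification. This sketch shells out to a cosign-style verifier; the command and key handling are assumptions for illustration, and in a real cluster this check would normally live in an admission controller or pipeline policy rather than a standalone script.

```python
import subprocess
import sys

PUBLIC_KEY = "cosign.pub"  # assumed key file distributed by your release process


def is_signed(image: str) -> bool:
    """Return True if the image verifies against our public key (cosign-style CLI assumed)."""
    result = subprocess.run(
        ["cosign", "verify", "--key", PUBLIC_KEY, image],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0


if __name__ == "__main__":
    image = sys.argv[1]
    if not is_signed(image):
        print(f"unsigned or tampered image, refusing to deploy: {image}", file=sys.stderr)
        sys.exit(1)
    print(f"signature verified: {image}")
```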
>> So where can your customers go? Clearly, you've made a big impact on our viewers with your conversation today. Where are they going to be able to go to get their hands on the release? >> So you can find it online. We have a website where you can sign up for the program. It's on my blog; we have a blog post out there for the ACS cloud service. You can just go there, sign up, and we will contact you. >> Yeah. And there's another way, if you ever want to get your hands on it and you can do it for free: open source StackRox. The product is completely open source, and I would love feedback in the Slack channel. We also get a ton of feedback from people who aren't actually paying customers, and they contribute upstream. So that's an awesome way to get started. But like you said, you can go and search for the ACS cloud service and the service preview. You don't have to be a Red Hat customer. If you're running a CNCF-compliant Kubernetes version, we'd love to hear from you. >> All open source, all out in the open. >> Yep. >> Getting it available to the customers, the non-customers, the hopefully pending customers. Guys, thank you so much for joining John and me to talk about the new release and the evolution of StackRox over the last 18 months. Lots of good stuff here. I think you've done a great job of getting the audience excited about what you're releasing. Thank you for your time. >> Thank you. >> Thank you. >> For our guests and for John Furrier, I'm Lisa Martin here in Detroit at KubeCon + CloudNativeCon North America. Coming to you live, we'll be back with our next guest in just a minute. (gentle music)
Shaked Askayo & Amit Eyal Govrin, Kubiya | KubeCon+CloudNativeCon NA 2022
>> Good afternoon everyone, and welcome back to theCUBE, where we're coming to you live from Detroit, Michigan at KubeCon and CloudNativeCon. We're going to keep theCUBE puns coming this afternoon because we have the pleasure of being joined by not one but two guests from Kubiya. John Furrier, my wonderful co-host, you're familiar with these guys. You just chatted with them last week. >> We broke the story of their launch and featured them on theCUBE in our studio conversation. This is a great segment. Really innovative company with lofty goals, and they're really good ones. Looking forward to it. >> If that's not a tease to keep watching, I don't know what is. (John laughs) Without further ado, on that note, allow me to introduce Amit and Shaked, who are here to tell us all about Kubiya. And I'm going to blow the pitch for you a little bit just because this gets me excited. (all laugh) They're essentially the Siri of DevOps, but that means you can create using voice or chat or any medium. Am I right? Yeah? >> You're hired. >> Excellent. (all laugh) >> Okay. >> We'll take it. >> Who knows what I'll tell the chat to do or what I'll control with my voice, but I love where you're going. >> Absolutely. I'll just give the high level: conversational AI for the world of DevOps. Kind of redefining how self-service DevOps is supposed to be accessed, right? As opposed to just having siloed information, having different platforms that require an operator, or somebody who's using them, to know exactly how they're accessing what they're doing and so forth. Essentially, the ability to express your intent in natural language, English or any language you use. >> It's quite literally the language barrier sometimes. >> Precisely. >> Both the spoken language and the code language. And it sounds like you're eliminating that as an obstacle. >> We're essentially saying, turn complex tasks into simple conversations. That's really what we're saying here. >> So let's get into the launch. You just launched a fresh startup. >> Yeah, yeah, yeah. >> Yeah. >> So you guys are going to take on the world. Lofty goals, if that. I had the briefing. Where does the origination story come from? How did you guys get here? Was it a problem that you saw, something you were experiencing, an itch you were scratching? What was the motivation, and what's the origination story? >> Shaked: So. >> Amit: Okay, go first please. >> Essentially everything started with my experience as an operator. I used to be a DevOps engineer for a few years at a large (indistinct) company, and at later stages I even managed an SRE team. All of these access requests and Q and A stuff were something I experienced nonstop on Slack and Teams, all of these communication channels. And usually I found that everything happens from the chat. So back then I created a chatbot. I connected this chatbot to the different organizational tools, and instead of the developers approaching me or the team through the on-call channel or directly, they would just approach the bot. The bot was very naive, and they still needed to know what they wanted to do inside it, but it still managed to solve 70% of the complexity and the toil on us as a team, so we could focus on innovation. So Kubiya is a more advanced version of that.
Basically, with Kubiya you can define what we call workflows, and we convert all of the complexity of access requests into simple conversations that the end users, which could be developers but not only, are having with the DevOps team. So that's essentially how it works, and we're very excited about it. >> So you were up all night answering the same question over and over again. (all laugh) And you said, hey, screw it, I'm going to just create a bot, bot myself up. (Shaked laughs) But it gets at something important. I mean, I'm not just joking. It probably happened, right? That was probably the case. You were up all night answering them. >> Yeah, I mean, it was usually stuff that we didn't need to maintain. It was training requests and questions that just keep repeating themselves. And actually we were in Israel, but we serve three different time zones of developers. So as soon as the day finishes in Israel, the day in the US starts, and all of these developers approach us from the US. So we didn't really sleep. (all laugh) It was these requests non-stop. >> It's that 24-hour clock. >> Yeah, yeah. 24 hours for a single team. >> So the world clock, the global (indistinct), catches you a little sometimes. Yeah. >> Yeah, exactly. >> So you basically take all the things that you know are common and then make a chatbot that answers as if it's you. But this brings up the whole question of chatbot utilization. There's been a lot of debate in AI circles that chatbots really haven't made it, that they haven't been good enough, because NLP and other things, >> Amit: Sure. >> haven't really clicked. What's different now? How do you guys see your approach cracking the code to get to that kind of reasoning level? Bots can reason? Now we're in business. >> Yeah. Most of the chatbots are general purpose, right? We're coming with the domain expertise. We know the pain from the inside, and we know how the operators want to define the conversations that users might have with the virtual assistant. So we combined all of the technical tools that are needed in order to get it going. We have a DSL, a domain-specific language, where the operators can define these easy conversations and combine all of the different organizational tools, which can be done using the SDK. Besides that, we have a no-code interface, for less technical people to create such workflows with no code at all. And we have a CLI, which you can use to leverage the power of the virtual assistant right from your terminal. So that's how I see the domain expertise coming in: we have different communication channels for everyone that needs to be inside the loop. >> That's awesome. >> And I can add to that. So that's one element, which is the domain expertise. The other one is really our huge differentiator: the ability to let the end users influence the system itself. So essentially. >> John: Like how? Give me an example. >> Sure. We call it the teach-me feature. Essentially, if you have any type of request and the system hasn't created an automation for it or doesn't recognize it, you can go ahead and bind that to your intent for next time, and you can define the scope: for yourself only, for the team, or even for the entire organization, which of course has to have permission to access the request, and so on. >> Savannah: That's something. Yeah, I love that as a knowledge base. I mean, a custom tool kit. >> Absolutely. >> And I like that you just said for the individual.
So let's say I have some crazy workflows that I don't need anybody else to know about. >> 100 percent. >> I can customize my experience. >> Mm hmm. >> Do you see your, this is really interesting, and I'm, it's surprising to me we haven't seen a lot of players in this space before because what you're doing makes a lot of sense to me, especially as someone who is less technical. >> Yeah. >> Do you view yourselves as a gateway tool for more folks to be involved in more complex technology? >> So, so I'll take that. It's not that we haven't seen advanced virtual assistants. They've existed in different worlds. >> Savannah: Right. >> Up until now they've existed more in CRM tools. >> Savannah: Right. >> Call centers, right? >> Shaked: Yeah. >> You go on to Ralph Lauren, Calvin Klein, you go and chat with. Now imagine you can bring that into a world of dev tools that has high domain expertise, high technical amplitude, and now you can go and combine the domain expertise with the accessibility of conversational AI. That's, that's a unique feature here. >> What's the biggest thing that's surprised you with the launch so far? The reaction to the name, Kubiya, which is Cube in Hebrew. >> Amit: Yes. >> Apparently. >> Savannah: Which I love. >> Which by the way, you know, we have a TM and R on our Cube. (all laugh) So we can talk, you know, license rights. >> Let's drop the trademark rules today, John, here. We're here to share information. Confuse the audience. Sorry about that, by the way. (all laugh) >> We're an open source, inclusive culture. We'll let it slide this time. >> The KubeCon, theCUBE, and Kubiya. (John laughs) In the Hebrew we have this saying, third time we all have ice cream. So. (all laugh) >> I think there's some ice cream over there actually. >> There is. >> Yeah, yeah. There you go. >> All kidding aside, all fun. What's, what's been the reaction? Got some press coverage. We had the launch. You guys launched with theCUBE in here, big reception. What's been the common feedback? >> And really, I think I expected this, but I didn't expect this much. Really, the fact that people really believe in our thesis, really expect great things from us, right? We've starting to working with. >> Savannah: Now the pressure's on. >> Rolling out dozens of POCs, but even that requires obviously a lot of attention to the detail, which we're rolling out. This is effectively what we're seeing. People love the fact that you have a unique and fresh way to approaching the self-service which really has been stalled for a while. And we've recognized that. I think our thesis is where we. >> Okay, so as a startup you have lofty goals, you have investors now. >> Amit: Yeah. >> Congratulations. >> Amit: Thank you. >> They're going to want to keep the traction going, but as a north star, what's your, what are you going to, what are you going to take? What territory are you going to take? Is it new territory? Are you eating someone's lunch? Who are you going to be competing with? What's the target? What's the, what's the? >> Sure, sure. >> I'm sure you guys have it. Who are you takin' over? >> I think the gateway, the entry point to every organization is a bottleneck. You solve the hard problem first. That's where you can go into other directions, and you can imagine where other operational workflows and pains that we can help solve once we have essentially the DevOps. >> John: So you see this as greenfield, new opportunity? >> I believe so. >> Is there any incumbent you see out there? An old stodgy? 
>> Today we're on internal developer platform service catalog type of, you know, use cases. >> John: Yeah. >> But that's kind of where we can grow from there and have the ecosystem essentially embrace us. >> John: How about the technology platform? >> Amit: Yeah. >> What's the vision for the innovation? >> Essentially want to be able to integrate with all of the different cloud providers, cloud solutions, SaaS platforms, and take the atlas approach that they were using right to the chats from everywhere to anywhere. So essentially we want in the end that users will be able to do anything that they need inside all of these complicated platforms, which some of them are totally complicated, with plain English. >> So what's the biggest challenge for you then on that front leading the technology side of the team? >> So I would say that the conversational AI part is truly complicated because it requires to extract many types of intentions from different types of users and also integrate with so many tools and solutions. >> Savannah: Yeah. So it requires a lot of thinking, a lot of architecture, but we are doing it just fine. >> Awesome. What do you guys think about KubeCon this week? What's, what's the top story that you see emerging out of this? Just generally as an industry observer, what's the most important? >> Savannah: Maybe it's them. Announcement halo. >> What's the cover story that you see? (all laugh) I mean you guys are in the innovation intent-based infrastructure. I get that. >> So obviously everyone's looking to diversify their engineering, diversify their platforms to make sure they're as decoupled from the main CSPs as possible. So being able to build their own, and we're really helping enable a lot of that in there. We're really helping improve upon that open source together with managed platforms can really play a very nice game together. So. >> Awesome. So are you guys hiring, recruiting? Tell us about the team DNA. Now you're in Tel Aviv. You're in the bay. >> Shaked: Check our openings on LinkedIn. (all laugh) >> We have a dozen job postings on our website. Obviously engineering and sales then go to market. >> So when theCUBE comes to Tel Aviv, and we have a location there. >> Yeah. >> Will you be, share some space? >> Savannah: Is this our Tel Aviv office happening right now? I love this. >> Amit: We will be hosting you. >> John: theCube with a C and Kube with a K over there. >> Yeah. >> All one happy family. >> Would love that. >> Get some ice cream. >> Would love that. >> All right, so last question for you all. You just had a very big exciting announcement. It's a bit of a coming out party for you. What do you hope to be able to say in a year that you can't currently say right now? When you join us on theCUBE next time? >> No, no, it's absolutely. I think our thesis that you can turn conversations into operations. It's, it sounds obvious to you when you think about it, but it's not trivial when you look into the workflows, into the operations. The fact that we can actually go a year from today and say we got hundreds of customers, happy customers who've proven the thesis or sharing knowledge between themselves, that would be euphoric for us. >> All right. >> You really are about helping people. >> Absolutely. >> It doesn't seem like it's a lip service from both of you. >> No. (all laugh) >> Is there going to be levels of bot, like level one bot level two, level three, and then finally the SRE gets on the phone? Is that like some point? 
>> Is there going to be bot singularity? Is that, is that what we're exploring right now? (overlapping chatter) >> Some kind of escalation bot. >> Enlightenment with bots. >> We actually planning a feature we want to call a handoff where a human in the loop is required, which often is needed. Machine cannot do it alone. We'll just. >> Yeah, I think it makes total sense for geos, ops at the same. >> Shaked: Yeah. >> But not exactly the same. Really good, good solution. I love the direction. Congratulations on the launch. >> Shaked: Thank you so much. >> Amit: Thank you very much. >> Yeah, that's very exciting. You can obviously look, check out that news on Silicon Angle since we had the pleasure of breaking it. >> Absolutely. >> If people would like to say hi, stalk you on the internet, where's the best place for them to do that? >> Be on our Twitter and LinkedIn handles of course. So we have kubiya.ai. We also have a free trial until the end of the year, and we also have free forever tier, that people can sign up, play, and come say hi. I mean, we'd love to chat. >> I love it. Well, Amit, Shaked, thank you so much for being with us. >> Shaked: Thank you so much. >> John, thanks for sitting to my left for the entire day. I sincerely appreciate it. >> Just glad I can help out. >> And thank you all for tuning in to this wonderful edition of theCUBE Live from Detroit at KubeCon. Who knows what my voice will be controlling next, but either way, I hope you are there to find out. >> Amit: Love it.
Daniel Rethmeier & Samir Kadoo | Accelerating Business Transformation
(upbeat music) >> Hi everyone. Welcome to theCUBE special presentation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We got two great guests, one for calling in from Germany, or videoing in from Germany, one from Maryland. We've got VMware and AWS. This is the customer successes with VMware Cloud on AWS Showcase: Accelerating Business Transformation. Here in the Showcase at Samir Kadoo, worldwide VMware strategic alliance solution architect leader with AWS. Samir, great to have you. And Daniel Rethmeier, principal architect global AWS synergy at VMware. Guys, you guys are working together, you're the key players in this relationship as it rolls out and continues to grow. So welcome to theCUBE. >> Thank you, greatly appreciate it. >> Great to have you guys both on. As you know, we've been covering this since 2016 when Pat Gelsinger, then CEO, and then then CEO AWS at Andy Jassy did this. It kind of got people by surprise, but it really kind of cleaned out the positioning in the enterprise for the success of VM workloads in the cloud. VMware's had great success with it since and you guys have the great partnerships. So this has been like a really strategic, successful partnership. Where are we right now? You know, years later, we got this whole inflection point coming, you're starting to see this idea of higher level services, more performance are coming in at the infrastructure side, more automation, more serverless, I mean and AI. I mean, it's just getting better and better every year in the cloud. Kind of a whole 'nother level. Where are we? Samir, let's start with you on the relationship. >> Yeah, totally. So I mean, there's several things to keep in mind, right? So in 2016, right, that's when the partnership between AWS and VMware was announced. And then less than a year later, that's when we officially launched VMware Cloud on AWS. Years later, we've been driving innovation, working with our customers, jointly engineering this between AWS and VMware. You know, one of the key things... Together, day in, day out, as far as advancing VMware Cloud on AWS. You know, even if you look at the innovation that takes place with the solution, things have modernized, things have changed, there's been advancements. You know, whether it's security focus, whether it's platform focus, whether it's networking focus, there's been modifications along the way, even storage, right, more recently. One of the things to keep in mind is we're looking to deliver value to our customers together. These are our joint customers. So there's hundreds of VMware and AWS engineers working together on this solution. And then factor in even our sales teams, right? We have VMware and AWS sales teams interacting with each other on a constant daily basis. We're working together with our customers at the end of the day too. Then we're looking to even offer and develop jointly engineered solutions specific to VMware Cloud on AWS. And even with VMware to other platforms as well. Then the other thing comes down to is where we have dedicated teams around this at both AWS and VMware. So even from solutions architects, even to our sales specialists, even to our account teams, even to specific engineering teams within the organizations, they all come together to drive this innovation forward with VMware Cloud on AWS and the jointly engineered solution partnership as well. 
And then I think one of the key things to keep in mind comes down to we have nearly 600 channel partners that have achieved VMware Cloud on AWS service competency. So think about it from the standpoint, there's 300 certified or validated technology solutions, they're now available to our customers. So that's even innovation right off the top as well. >> Great stuff. Daniel, I want to get to you in a second upon this principal architect position you have. In your title, you're the global AWS synergy person. Synergy means bringing things together, making it work. Take us through the architecture, because we heard a lot of folks at VMware explore this year, formerly VMworld, talking about how the workloads on IT has been completely transforming into cloud and hybrid, right? This is where the action is. Where are you? Is your customers taking advantage of that new shift? You got AIOps, you got ITOps changing a lot, you got a lot more automation, edges right around the corner. This is like a complete transformation from where we were just five years ago. What's your thoughts on the relationship? >> So at first, I would like to emphasize that our collaboration is not just that we have dedicated teams to help our customers get the most and the best benefits out of VMware Cloud and AWS, we are also enabling us mutually. So AWS learns from us about the VMware technology, where VMware people learn about the AWS technology. We are also enabling our channel partners and we are working together on customer projects. So we have regular assembles globally and also virtually on Slack and the usual suspect tools working together and listening to customers. That's very important. Asking our customers where are their needs? And we are driving the solution into the direction that our customers get the best benefits out of VMware Cloud on AWS. And over the time, we really have involved the solution. As Samir mentioned, we just added additional storage solutions to VMware Cloud on AWS. We now have three different instance types that cover a broad range of workloads. So for example, we just edited the I4i host, which is ideally for workloads that require a lot of CPU power, such as, you mentioned it, AI workloads. >> Yeah, so I want to get us just specifically on the customer journey and their transformation, you know, we've been reporting on Silicon angle in theCUBE in the past couple weeks in a big way that the ops teams are now the new devs, right? I mean that sounds a little bit weird, but IT operations is now part of a lot more DataOps, security, writing code, composing. You know, with open source, a lot of great things are changing. Can you share specifically what customers are looking for when you say, as you guys come in and assess their needs, what are they doing, what are some of the things that they're doing with VMware on AWS specifically that's a little bit different? Can you share some of and highlights there? >> That's a great point, because originally, VMware and AWS came from very different directions when it comes to speaking people and customers. So for example, AWS, very developer focused, whereas VMware has a very great footprint in the ITOps area. And usually these are very different teams, groups, different cultures, but it's getting together. However, we always try to address the customer needs, right? There are customers that want to build up a new application from the scratch and build resiliency, availability, recoverability, scalability into the application. 
But there are still a lot of customers that say, "Well, we don't have all of the skills to redevelop everything to refactor an application to make it highly available. So we want to have all of that as a service. Recoverability as a service, scalability as a service. We want to have this from the infrastructure." That was one of the unique selling points for VMware on-premise and now we are bringing this into the cloud. >> Samir, talk about your perspective. I want to get your thoughts, and not to take a tangent, but we had covered the AWS re:MARS, actually it was Amazon re:MARS, machine learning automation, robotics and space was really kind of the confluence of industrial IoT, software, physical. And so when you look at like the IT operations piece becoming more software, you're seeing things about automation, but the skill gap is huge. So you're seeing low code, no code, automation, you know, "Hey Alexa, deploy a Kubernetes cluster." Yeah, I mean that's coming, right? So we're seeing this kind of operating automation meets higher level services, meets workloads. Can you unpack that and share your opinion on what you see there from an Amazon perspective and how it relates to this? >> Yeah. Yeah, totally, right? And you know, look at it from the point of view where we said this is a jointly engineered solution, but it's not migrating to one option or the other option, right? It's more or less together. So even with VMware Cloud on AWS, yes it is utilizing AWS infrastructure, but your environment is connected to that AWS VPC in your AWS account. So if you want to leverage any of the native AWS services, so any of the 200 plus AWS services, you have that option to do so. So that's going to give you that power to do certain things, such as, for example, like how you mentioned with IoT, even with utilizing Alexa, or if there's any other service that you want to utilize, that's the joining point between both of the offerings right off the top. Though with digital transformation, right, you have to think about where it's not just about the technology, right? There's also where you want to drive growth in the underlying technology even in your business. Leaders are looking to reinvent their business, they're looking to take different steps as far as pursuing a new strategy, maybe it's a process, maybe it's with the people, the culture, like how you said before, where people are coming in from a different background, right? They may not be used to the cloud, they may not be used to AWS services, but now you have that capability to mesh them together. >> Okay. >> Then also- >> Oh, go ahead, finish your thought. >> No, no, no, I was going to say what it also comes down to is you need to think about the operating model too, where it is a shift, right? Especially for that vStor admin that's used to their on-premises environment. Now with VMware Cloud on AWS, you have that ability to leverage a cloud, but the investment that you made and certain things as far as automation, even with monitoring, even with logging, you still have that methodology where you can utilize that in VMware Cloud on AWS too. >> Daniel, I want to get your thoughts on this because at Explore and after the event, as we prep for CubeCon and re:Invent coming up, the big AWS show, I had a couple conversations with a lot of the VMware customers and operators, and it's like hundreds of thousands of users and millions of people talking about and peaked on VMware, interested in VMware. 
The common thread was one person said, "I'm trying to figure out where I'm going to put my career in the next 10 to 15 years." And they've been very comfortable with VMware in the past, very loyal, and they're kind of talking about, I'm going to be the next cloud, but there's no like role yet. Architects, is it solution architect, SRE? So you're starting to see the psychology of the operators who now are going to try to make these career decisions. Like what am I going to work on? And then it's kind of fuzzy, but I want to get your thoughts, how would you talk to that persona about the future of VMware on, say, cloud for instance? What should they be thinking about? What's the opportunity? And what's going to happen? >> So digital transformation definitely is a huge change for many organizations and leaders are perfectly aware of what that means. And that also means to some extent, concerns with your existing employees. Concerns about do I have to relearn everything? Do I have to acquire new skills and trainings? Is everything worthless I learned over the last 15 years of my career? And the answer is to make digital transformation a success, we need not just to talk about technology, but also about process, people, and culture. And this is where VMware really can help because if you are applying VMware Cloud on AWS to your infrastructure, to your existing on-premise infrastructure, you do not need to change many things. You can use the same tools and skills, you can manage your virtual machines as you did in your on-premise environment, you can use the same managing and monitoring tools, if you have written, and many customers did this, if you have developed hundreds of scripts that automate tasks and if you know how to troubleshoot things, then you can use all of that in VMware Cloud on AWS. And that gives not just leaders, but also the architects at customers, the operators at customers, the confidence in such a complex project. >> The consistency, very key point, gives them the confidence to go. And then now that once they're confident, they can start committing themselves to new things. Samir, you're reacting to this because on your side, you've got higher level services, you've got more performance at the hardware level. I mean, a lot improvements. So, okay, nothing's changed, I can still run my job, now I got goodness on the other side. What's the upside? What's in it for the customer there? >> Yeah, so I think what it comes down to is they've already been so used to or entrenched with that VMware admin mentality, right? But now extending that to the cloud, that's where now you have that bridge between VMware Cloud on AWS to bridge that VMware knowledge with that AWS knowledge. So I will look at it from the point of view where now one has that capability and that ability to just learn about the cloud. But if they're comfortable with certain aspects, no one's saying you have to change anything. You can still leverage that, right? But now if you want to utilize any other AWS service in conjunction with that VM that resides maybe on-premises or even in VMware Cloud on AWS, you have that option to do so. So think about it where you have that ability to be someone who's curious and wants to learn. And then if you want to expand on the skills, you certainly have that capability to do so. >> Great stuff, I love that. Now that we're peeking behind the curtain here, I'd love to have you guys explain, 'cause people want to know what's goes on behind the scenes. How does innovation get happen? 
How does it happen with the relationships? Can you take us through a day in the life of kind of what goes on to make innovation happen with the joint partnership? Do you guys just have a Zoom meeting, do you guys fly out, you write code, go do you ship things? I mean, I'm making it up, but you get the idea. How does it work? What's going on behind the scenes? >> So we hope to get more frequently together in-person, but of course we had some difficulties over the last two to three years. So we are very used to Zoom conferences and Slack meetings. You always have to have the time difference in mind if you are working globally together. But what we try, for example, we have regular assembles now also in-person, geo-based, so for AMEA, for the Americas, for APJ. And we are bringing up interesting customer situations, architectural bits and pieces together. We are discussing it always to share and to contribute to our community. >> What's interesting, you know, as events are coming back, Samir, before you weigh in this, I'll comment as theCUBE's been going back out to events, we're hearing comments like, "What pandemic? We were more productive in the pandemic." I mean, developers know how to work remotely and they've been on all the tools there, but then they get in-person, they're happy to see people, but no one's really missed the beat. I mean, it seems to be very productive, you know, workflow, not a lot of disruption. More, if anything, productivity gains. >> Agreed, right? I think one of the key things to keep in mind is even if you look at AWS's, and even Amazon's leadership principles, right? Customer obsession, that's key. VMware is carrying that forward as well. Where we are working with our customers, like how Daniel said and meant earlier, right? We might have meetings at different time zones, maybe it's in-person, maybe it's virtual, but together we're working to listen to our customers. You know, we're taking and capturing that feedback to drive innovation in VMware Cloud on AWS as well. But one of the key things to keep in mind is yes, there has been the pandemic, we might have been disconnected to a certain extent, but together through technology, we've been able to still communicate, work with our customers, even with VMware in between, with AWS and whatnot, we had that flexibility to innovate and continue that innovation. So even if you look at it from the point of view, right? VMware Cloud on AWS Outposts, that was something that customers have been asking for. We've been able to leverage the feedback and then continue to drive innovation even around VMware Cloud on AWS Outposts. So even with the on-premises environment, if you're looking to handle maybe data sovereignty or compliance needs, maybe you have low latency requirements, that's where certain advancements come into play, right? So the key thing is always to maintain that communication track. >> In our last segment we did here on this Showcase, we listed the accomplishments and they were pretty significant. I mean geo, you got the global rollouts of the relationship. It's just really been interesting and people can reference that, we won't get into it here. But I will ask you guys to comment on, as you guys continue to evolve the relationship, what's in it for the customer? What can they expect next? Because again, I think right now, we're at an inflection point more than ever. What can people expect from the relationship and what's coming up with re:Invent? 
Can you share a little bit of what's coming down the pike? >> So one of the most important things we have announced this year, and we will continue to evolve in that direction, is independent scaling of storage. That was absolutely one of the most important items customers asked for over the last years. Whenever you require additional storage to host your virtual machines in VMware Cloud on AWS, you usually have to add additional nodes. Now we have three different node types with different ratios of compute, storage, and memory. But if you only require additional storage, you always had to get additional compute and memory as well, and you have to pay for it. Now, with two solutions that offer choice for customers, FSx for NetApp ONTAP and VMware Cloud Flex Storage, you have two cost-effective ways to add storage to your virtual machines. And that opens up opportunities for other instance types, maybe ones that don't have local storage. We are also very keenly looking forward to exciting announcements at the upcoming events. >> Samir, what's your reaction, your take on what's coming down on your side? >> Yeah, I think one of the key things to keep in mind is that we're looking to help our customers be agile and scale with their needs, right? With VMware Cloud on AWS, that's one of the key things that comes to mind. There are going to be announcements, innovations, and whatnot with upcoming events, but together, we're able to leverage that to advance VMware Cloud on AWS. To Daniel's point, storage, for example, even with host offerings, and even with decoupling storage from compute and memory. Now you have the flexibility where you can do all of that. And look at it from the standpoint that we now have VMware Cloud on AWS available in 21 regions, where customers can utilize it as needed, when needed. So it comes down to, you know, transformation will be there. Yes, there are going to be cases where workloads have to be adapted, where they're utilizing certain AWS services, but you have that flexibility and that option to do so. And I think with the continuing events, that's going to give us the options to advance our own services together. >> Well, you guys are in the middle of it, you're in the trenches, you're making things happen, you've got a team of people working together. My final question is really more of a current-situation, future-evolution kind of thing that you haven't seen before. I want to get both of your reactions to it. We've been bringing this up in the open conversations on theCUBE: in the old days, let's go back a generation, you had ecosystems. VMware had an ecosystem, AWS had an ecosystem. You know, we have a product, you have a product, biz dev deals happen, people sign relationships, and they do business together and sell each other's products or do some stuff. Now it's more about architecture, because we're now in a distributed, large-scale environment where the roles of ecosystems are intertwining, and you guys are in the middle of two big ecosystems. You mentioned channel partners; you both have a lot of partners on both sides, and they come together. So you have this now almost three-dimensional or multidimensional ecosystem interplay. What are your thoughts on this? Because it's about the architecture; integration is a value, not just innovation alone.
You've got to do innovation, but when you do innovation, you've got to integrate it, you've got to connect it. So how do you guys see this as an architectural thing? Do you start to see more technical business deals? >> So we are removing dependencies on individual ecosystems and on individual vendors. A customer no longer has to commit to one vendor, where it then becomes a very expensive, high-effort project to move away from that vendor, which ties customers even closer to specific vendors. We are removing these obstacles. So with VMware Cloud on AWS, moving to the cloud, firstly, is not a dead end. If you decide at one point in time that because of latency requirements, or maybe some compliance requirements, you need to move back on-premises, you can do this. If you decide you want to stay with some of your services on-premises and just run a couple of dedicated services in the cloud, you can do this, and you can manage it through a single pane of glass. That's quite important. So cloud is no longer a dead end, and it's no longer a binary decision between on-premises and the cloud. And the second thing is you can choose the best of both worlds, right? If you migrate virtual machines that have been running in your on-premises environment to VMware Cloud on AWS, in a very fast, cost-effective, and safe way, then you can later enrich these virtual machines with services that are offered by AWS, more than 200 different services, ranging from object-based storage to load balancing and so on. So the possibilities are endless.
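As a small illustration of the "enrich later" idea Daniel describes: an application lifted into a VM on VMware Cloud on AWS can start calling native services in the connected AWS account with ordinary SDK calls, without being refactored. The bucket name, file path, and credential setup below are assumptions for the example, not anything specific to the offering.

```python
import boto3

# Credentials are assumed to come from the usual AWS SDK chain (env vars, instance profile, etc.).
s3 = boto3.client("s3")


def archive_report(bucket: str, key: str, path: str) -> None:
    """Push a file produced by the migrated app into S3 in the connected AWS account."""
    with open(path, "rb") as handle:
        s3.put_object(Bucket=bucket, Key=key, Body=handle)


if __name__ == "__main__":
    # Example: the same nightly report the app used to write to local disk on-premises
    # now also lands in object storage, without refactoring the app itself.
    archive_report("example-reports-bucket", "nightly/2022-10-28.csv", "/var/reports/nightly.csv")
```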
>> We call that supercloud, in the way that we generically define it, where everyone's innovating but yet there are some common services. The differentiation comes from innovation, where the lock-in is the value, not some spec, right? Samir, this is kind of where cloud is right now. You guys are not a commodity. Amazon's completely differentiating, but there are some commodity things happening. You've got storage, you've got compute, but then you've got advances in all areas. And partners innovate with you on their terms. >> Absolutely. >> And everybody wins. >> Yeah, I 100% agree with you. I think one of the key things, you know, as Daniel mentioned before, is that it's a cross-education, where there might be someone who's more proficient on the cloud side with AWS, and maybe someone more proficient with VMware's technology. But then for partners, right? They bridge that gap as well, where they come in and they might have a specific niche or expertise from their background, where they can help our customers go through that transformation. So then it comes down to, hey, maybe I don't know how to connect to the cloud, maybe I don't know what the networking constructs are, so maybe I can leverage that partner. That's one way to go about it. Now maybe you migrated that workload to VMware Cloud on AWS, and maybe you want to leverage any of the native AWS services, even just off the top, the 200-plus AWS services, right? But it comes down to that skill set. So again, solutions architecture: at the end of the day, what it comes down to is being able to utilize the best of both worlds. That's what we're giving our customers at the end of the day. >> I mean, I just think it's a refactoring and innovation opportunity at all levels. I think now, more than ever, you can take advantage of each other's ecosystems and partners and technologies and change how things get done while keeping the consistency. I mean, Daniel, you nailed that, right? I mean, you don't have to do anything. You still run it. Keep the way you're working and now do new things. This is kind of a cultural shift. >> Yeah, absolutely. And if you look, not every customer, not every organization has the resources to refactor and re-platform everything. And we give them a very simple and easy way to move workloads to the cloud: simply run them, and at the same time they can free up resources to develop new innovations and grow their business. >> Awesome. Samir, thank you for coming on. Daniel, thank you for joining from Germany. >> Thank you. >> Oktoberfest, I know it's evening over there and the weekend's here, and thank you for spending the time. Samir, I'll give you the final word. AWS re:Invent's coming up. We're preparing, we're going to have an exclusive with Adam, with Furrier, we'll do a curtain raiser and do a little preview. What's coming down on your side with the relationship, and what can we expect to hear about what you've got going on at re:Invent this year, the big show? >> Yeah, so I think Daniel hit upon some of the key points, but what I will say is we do have specific sessions, both ones that VMware is driving and ones that AWS is driving. We also have what are called chalk talks, and then workshops as well. So for the customers and attendees who are there, if they're looking to sit and listen to a session, yes, that's there, but if they want to be hands-on, that is also there. Personally, coming from an IT background, having been in the sysadmin world and whatnot, being hands-on is one of the key things I'm personally looking forward to. I think that's one of the key ways to learn and get familiar with the technology. >> Yeah, and re:Invent's an amazing show for the in-person experience. You guys nail it every year. We'll have three sets this year at theCUBE, and it's becoming popular. We have more and more content, you guys have got live streams going on, a lot of content, a lot of media. So thanks for sharing that. Samir, Daniel, thank you for coming on this part of the Showcase episode, really the customer successes with VMware Cloud on AWS, really accelerating business transformation with AWS and VMware. I'm John Furrier with theCUBE, thanks for watching. (upbeat music)