Christopher Voss, Microsoft | KubeCon + CloudNativeCon Europe 2022
>> theCUBE presents KubeCon and CloudNativeCon Europe 2022. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners.
>> Welcome to Valencia, Spain, at KubeCon, CloudNativeCon Europe 2022. I'm Keith Townsend with my cohost, Enrico Signoretti, Senior IT Analyst at GigaOm.
>> Exactly.
>> 7,500 people I'm told, Enrico. What's the flavor of the show so far?
>> It's a fantastic mood. I mean, I found a lot of people wanting to talk about what they're doing with Kubernetes, sharing their, you know, stories, some war stories that were a bit tough. And you know, this is where you learn, actually. Because we had a lot of Zoom calls, webinars and stuff. But it's when you talk in person, "Oh, I did it this way, and it didn't work out very well," that you start a conversation that is really different from learning over Zoom, where, you know, everybody talks about the things that worked well, that they did right. No, it's here that you learn from other people's experiences.
>> So we're talking to amazing people the whole week, talking about those experiences here on theCUBE. Fresh on theCUBE for the first time, Chris Voss, senior software engineer at Microsoft Xbox. Chris, welcome to theCUBE.
>> Thank you so much for having me.
>> So first off, give us a high-level picture of the environment that you're running at Microsoft.
>> Yeah. So, you know, we've got 20, well, probably close to 30 clusters at this point around the globe, you know, 700 to 1,000 pods per cluster, roughly. So about 22,000 pods total. So yeah, it's a pretty sizable footprint. And we've been running on Kubernetes since 2018, well, actually it might be 2017, but anyway, that's kind of our footprint.
>> So with all of that, let's talk about the basics, which is security across multiple, I'm assuming, containers, microservices, et cetera. Why did you and the team settle on Linkerd?
>> Yeah, so previously we had our own kind of solution for managing TLS certs and things like that. And we found it to be pretty painful, pretty quickly. And so we knew, you know, we wanted something that was a little bit more abstracted away from the developers and things like that, that allowed us to move quickly. And so we began investigating, you know, solutions to that. And a few of our colleagues went to KubeCon in San Diego in 2019, CloudNativeCon as well. And basically they just, you know, sponged it all up. And actually, funny enough, my old manager was one of the people who was there, and he went to the Linkerd booth, and they had a thing going that was like, "Hey, get set up with mTLS in five minutes." And he was like, "This is something we want to do, why not check this out?" And he was able to do it. And so that put it on our radar. And so yeah, we investigated several others, and Linkerd just perfectly fit exactly what we needed.
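For readers curious what that five-minute booth demo looks like in practice, here is a minimal sketch, assuming the Linkerd 2.x CLI and kubectl are on the PATH and pointed at a test cluster. The deployment name is hypothetical, and these are the standard commands from Linkerd's getting-started flow, not necessarily the exact steps the Xbox team ran:

```python
import subprocess

def sh(cmd: str) -> None:
    """Run a shell command line, raising if it exits non-zero."""
    subprocess.run(cmd, shell=True, check=True)

# Install the Linkerd control plane; mutual TLS between meshed pods
# is enabled by default, with no per-service certificate work.
sh("linkerd install | kubectl apply -f -")

# Wait for the control plane to come up and validate the installation.
sh("linkerd check")

# Add the proxy sidecar to an existing workload ("my-service" is a
# hypothetical deployment name); its traffic is then mTLS'd automatically.
sh("kubectl get deploy my-service -o yaml | linkerd inject - | kubectl apply -f -")
```

Once the proxy is injected, certificate issuance and rotation are handled by Linkerd's identity service, which is what removes the manual cert minting described next.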
>> So, in general, we are talking about, you know, security at scale. So how do you manage security at scale, and also flexibility, right? You told us about the five minutes to get started, but you know, again, we are talking about war stories. So what kind of challenges did you find at the beginning, when you started adopting this technology?
>> So the biggest ones were around getting up and running with, like, a new service. Especially in the beginning, right, we were, you know, adding a new service almost every day, it felt like. And so, you know, basically it took someone going through a whole bunch of different repos, getting approvals from everyone to get the certs minted, all that fun stuff, getting them put into the right environments and the right clusters to make sure that, you know, everybody is talking appropriately. And just the amount of work that that took alone was a huge headache and a huge barrier to entry for us to quickly grow the number of services we have.
>> So, I'm trying to wrap my head around the scale of the challenge. When I think about certificate management, I have to do it on a small scale. And every now and again, when a certificate expires, it is just a troubleshooting pain.
>> Yes.
>> So as I think about that, it's not just certificates across 22,000 pods, it's certificates across 22,000 pods in multiple applications. How were you doing that before Linkerd? What were the pain points? Like, what happens when a certificate either fails, or expires, or isn't updated?
>> So, I mean, to be completely honest, the biggest thing is we're just unable to make the calls, you know, out or in, based on, yeah, what is failing, basically. But, you know, we saw essentially an uptick in failures around a certain service, and pretty quickly we got used to the fact that it was like, oh, it's probably a cert expiration issue. And so we tried, you know, a few things in order to make that a little bit more automated and things like that. But we never came to a solution that didn't require every engineer on the team to know, essentially, quite a bit about this just to get into it, which was a huge issue.
>> So talk about day two, after you've deployed Linkerd. How did this alleviate things for software engineers? And what were the benefits of now having this automated way of managing certs?
>> So the biggest thing is, like, there is no touch from developers. Everyone on our team... Well, I mean, there are a lot of people who are familiar with security and certs and all of that stuff, but no one has to know it. Like, it's not a requirement. For instance, I knew nothing about it when I joined the team. And even when I was setting up our newer clusters, I knew very little about it. And I was still able to really quickly set up Linkerd, which was really nice. And it's been, you know, essentially we've been able to just kind of set it and not think about it too much. Obviously, you know, there are parts of it that you have to think about, we monitor it and all that fun stuff, but yeah, it's been pretty painless almost from day one. It took a long time for developers to trust it. You know, anytime there was a failure, it was like, "Oh, could this be Linkerd?" you know. But after a while, now we don't have that immediate assumption, because people have built up that trust.
>> Also, you have this massive infrastructure, I mean, 30 clusters. So, I guess, it's quite different to manage a single cluster versus 30. So what are, you know, the considerations you have to make to install this software on, you know, 30 different clusters, manage different, you know, versions probably, et cetera, et cetera?
>> So, I mean, you know, as far as... I guess, just to clarify, are you asking specifically about Linkerd? Or are you asking more in general?
>> Well, I mean, you can take the question in two ways.
>> Okay.
>> Sure, yeah, so Linkerd in particular, but the 30 clusters are also quite interesting.
>> Yeah. So, I mean, you know, more generally, you know, how we manage our clusters and things like that: we have, you know, a CLI tool that we use in order to change context very quickly, and switch and communicate with whatever cluster we're trying to connect to, and, you know, are we debugging or getting logs, whatever. And then, you know, with Linkerd it's nice because, again, you know, we aren't having to worry about, like, oh, how is this cert being inserted in the right node, or not the right node, but in the right cluster, or things like that. When we spin up our clusters, essentially we get the root certificate and everything like that packaged up and passed along to Linkerd on installation. And then essentially there's not much we have to do after that.
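A minimal sketch of what that hands-off, multi-cluster bring-up might look like, assuming credentials are minted ahead of time by an internal CA. The context and file names are hypothetical; the `--identity-*` flags are Linkerd's documented mechanism for supplying your own trust root, and a shared trust anchor with a per-cluster issuer is the pattern Linkerd's docs describe:

```python
import subprocess

# Hypothetical kubeconfig context names, one per cluster.
CLUSTERS = ["xbox-westus", "xbox-northeurope", "xbox-eastasia"]

def sh(cmd: str) -> None:
    """Run a shell command line, raising if it exits non-zero."""
    subprocess.run(cmd, shell=True, check=True)

for ctx in CLUSTERS:
    # Point kubectl (and therefore the linkerd CLI) at the next cluster.
    sh(f"kubectl config use-context {ctx}")

    # Install Linkerd with pre-generated credentials: one shared trust
    # anchor (ca.crt) across all clusters, plus a per-cluster issuer cert
    # and key, so meshed services can authenticate each other without any
    # developer involvement.
    sh(
        "linkerd install "
        "--identity-trust-anchors-file ca.crt "
        f"--identity-issuer-certificate-file issuer-{ctx}.crt "
        f"--identity-issuer-key-file issuer-{ctx}.key "
        "| kubectl apply -f -"
    )

    # Confirm the control plane is healthy before moving on.
    sh("linkerd check")
```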
>> So talk to me about your upcoming session here at KubeCon. What are the high-level talking points? Like, what will attendees learn?
>> Yeah. So it's a journey. Those are the sorts of talks that I find useful, having not been, you know... I'm not a deep Kubernetes expert from, you know, decades or whatever of experience, but--
>> I think nobody is.
>> (indistinct)
>> True, yes.
>> That's also true.
>> That's another story.
>> That's a job posting, decades of requirements for--
>> Of course, yeah. But so, you know, it's a journey. It's really just like, hey, what made us decide on a service mesh in the first place? What made us choose Linkerd? And then, what are the ways in which, you know, we use Linkerd? So, you know, we use some of the extra plugins and things like that. And then finally, a little bit more about what we're going to do in the future.
>> Let's talk about not just necessarily the future as in two or three days from now, or two or three years from now, but the future after you immediately solve the low-level problems with Linkerd. What were some of the surprises? Because Linkerd, and service mesh in general, have side benefits. Did you experience any of those side benefits as well?
>> Yeah, it's funny, you know, writing the blog post, I hadn't really looked at a lot of the data in years, from, you know, when we did our investigations and things like that. And we had seen that we had very low latency and low CPU utilization and things like that. And looking at some of that, I found that we were actually saving time off of requests. And I couldn't really think of why that was, and I was talking with someone else, and, well, unfortunately all that data's gone now, like the source data, so I can't go back and verify this. But it makes sense, you know, there's the availability zone routing that Linkerd supports. And so I think that's actually doing it, where, you know, essentially, if a node is closer to another node, it's, you know, routing to those ones. So when one service is talking to another service, and maybe they're on the same node, you know, it short-circuits that and allows us to gain some time there. It's not huge, but it adds up after, you know, 10, 20 calls down the line.
>> Right. In general, so you are saying that it's smooth operations, that it's very much, you know, simplifying your life.
>> And again, we didn't have to really do anything for that. It handled that for us.
>> It was there?
>> Yep. Yeah, exactly.
>> So we know one thing: when I do it on my laptop, it works fine. When I do it across 22,000 pods, that's a different experience. What were some of the lessons learned coming out of KubeCon 2018 in San Diego? I was there. I wish I would've run into the Microsoft folks. But what were some of the hard lessons learned scaling Linkerd across those 22,000 pods?
>> So, you know, the first one, and this seems pretty obvious, but it was just not something I knew about, was the high-availability mode of Linkerd. So it obviously makes sense. You would want that in, you know, a large-scale environment. So, like, that's one of the big lessons that we didn't know right away. Like, one of the mistakes we made in one of our pre-production clusters was not turning that on. And we were kind of surprised. We were like, whoa, all of these pods are spinning up, but they're having issues, like, actually getting injected and things like that. And we found, oh, okay, yeah, you need to actually give it some more resources. But it's still very lightweight considering, you know... they have high-availability mode, but it's just a few instances still.
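For reference, high-availability mode is a single flag on the same install path. A minimal sketch, under the same assumptions as the earlier snippets:

```python
import subprocess

def sh(cmd: str) -> None:
    """Run a shell command line, raising if it exits non-zero."""
    subprocess.run(cmd, shell=True, check=True)

# The --ha flag runs multiple replicas of each control-plane component,
# with pod anti-affinity and production-grade resource requests; this is
# the setting the team initially missed on a pre-production cluster.
sh("linkerd install --ha | kubectl apply -f -")

# linkerd check reports control-plane pods that are unhealthy or fail to
# schedule, for example when the cluster can't satisfy the anti-affinity
# rules or resource requests.
sh("linkerd check")
```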
>> So even from, you know, a binary perspective, and running Linkerd, how much overhead is it?
>> That is a great question. So I don't remember the numbers off the top of my head, but it's very lightweight. We evaluated a few different service meshes, and it was the lightest weight that we encountered at that point.
>> And then from a resource perspective, is it a team of Linkerd people? Is it a couple of people? Like, how?
>> To be completely honest, for a long time it was one person, Abraham, who actually is the person who proposed this talk. He couldn't make it to Valencia, but he essentially did probably 95% of the work to get it into production. And this was before we even had a team dedicated to our infrastructure. And now we have a dedicated team; we're all kind of Linkerd folks, if not Linkerd experts, we at least can troubleshoot, basically, and things like that. So it's, I think, a group of six people on our team, and then, you know, various people who've had experience with it on other teams.
>> But no one dedicated just to that?
>> No one is dedicated just to it. No, it's pretty light touch once it's up and running. It took a very long time for us to really understand it and to, you know, get... not getting started, but getting to where we really felt comfortable letting it go in production. But once it was there, it is very, very light touch.
>> Well, I really appreciate you stopping by, Chris. It's been an amazing conversation to hear how Microsoft is using an open source project--
>> Exactly.
>> --at scale. It's just a few years ago that you would've heard the concept of Microsoft and open source together and been like, oh, that's just, you know--
>> They have changed a lot in the last few years. Now they are huge contributors. And, you know, if you go to Azure, it's full of open source stuff, everywhere. So...
>> Yeah.
>> Wow. KubeCon 2022, how the world has changed in so many ways. From Valencia, Spain, I'm Keith Townsend, along with Enrico Signoretti. You're watching theCUBE, the leader in high-tech coverage. (upbeat music)