Steven Hillion & Jeff Fletcher, Astronomer | AWS Startup Showcase S3E1


 

(upbeat music) >> Welcome everyone to theCUBE's presentation of the AWS Startup Showcase AI/ML Top Startups Building Foundation Model Infrastructure. This is season three, episode one of our ongoing series covering exciting startups from the AWS ecosystem to talk about data and analytics. I'm your host, Lisa Martin and today we're excited to be joined by two guests from Astronomer. Steven Hillion joins us, it's Chief Data Officer and Jeff Fletcher, it's director of ML. They're here to talk about machine learning and data orchestration. Guys, thank you so much for joining us today. >> Thank you. >> It's great to be here. >> Before we get into machine learning let's give the audience an overview of Astronomer. Talk about what that is, Steven. Talk about what you mean by data orchestration. >> Yeah, let's start with Astronomer. We're the Airflow company basically. The commercial developer behind the open-source project, Apache Airflow. I don't know if you've heard of Airflow. It's sort of de-facto standard these days for orchestrating data pipelines, data engineering pipelines, and as we'll talk about later, machine learning pipelines. It's really is the de-facto standard. I think we're up to about 12 million downloads a month. That's actually as a open-source project. I think at this point it's more popular by some measures than Slack. Airflow was created by Airbnb some years ago to manage all of their data pipelines and manage all of their workflows and now it powers the data ecosystem for organizations as diverse as Electronic Arts, Conde Nast is one of our big customers, a big user of Airflow. And also not to mention the biggest banks on Wall Street use Airflow and Astronomer to power the flow of data throughout their organizations. >> Talk about that a little bit more, Steven, in terms of the business impact. You mentioned some great customer names there. What is the business impact or outcomes that a data orchestration strategy enables businesses to achieve? >> Yeah, I mean, at the heart of it is quite simply, scheduling and managing data pipelines. And so if you have some enormous retailer who's managing the flow of information throughout their organization they may literally have thousands or even tens of thousands of data pipelines that need to execute every day to do things as simple as delivering metrics for the executives to consume at the end of the day, to producing on a weekly basis new machine learning models that can be used to drive product recommendations. One of our customers, for example, is a British food delivery service. And you get those recommendations in your application that says, "Well, maybe you want to have samosas with your curry." That sort of thing is powered by machine learning models that they train on a regular basis to reflect changing conditions in the market. And those are produced through Airflow and through the Astronomer platform, which is essentially a managed platform for running airflow. So at its simplest it really is just scheduling and managing those workflows. But that's easier said than done of course. I mean if you have 10 thousands of those things then you need to make sure that they all run that they all have sufficient compute resources. If things fail, how do you track those down across those 10,000 workflows? How easy is it for an average data scientist or data engineer to contribute their code, their Python notebooks or their SQL code into a production environment? 
And then you've got reproducibility, governance, auditing, like managing data flows across an organization which we think of as orchestrating them is much more than just scheduling. It becomes really complicated pretty quickly. >> I imagine there's a fair amount of complexity there. Jeff, let's bring you into the conversation. Talk a little bit about Astronomer through your lens, data orchestration and how it applies to MLOps. >> So I come from a machine learning background and for me the interesting part is that machine learning requires the expansion into orchestration. A lot of the same things that you're using to go and develop and build pipelines in a standard data orchestration space applies equally well in a machine learning orchestration space. What you're doing is you're moving data between different locations, between different tools, and then tasking different types of tools to act on that data. So extending it made logical sense from a implementation perspective. And a lot of my focus at Astronomer is really to explain how Airflow can be used well in a machine learning context. It is being used well, it is being used a lot by the customers that we have and also by users of the open source version. But it's really being able to explain to people why it's a natural extension for it and how well it fits into that. And a lot of it is also extending some of the infrastructure capabilities that Astronomer provides to those customers for them to be able to run some of the more platform specific requirements that come with doing machine learning pipelines. >> Let's get into some of the things that make Astronomer unique. Jeff, sticking with you, when you're in customer conversations, what are some of the key differentiators that you articulate to customers? >> So a lot of it is that we are not specific to one cloud provider. So we have the ability to operate across all of the big cloud providers. I know, I'm certain we have the best developers that understand how best practices implementations for data orchestration works. So we spend a lot of time talking to not just the business outcomes and the business users of the product, but also also for the technical people, how to help them better implement things that they may have come across on a Stack Overflow article or not necessarily just grown with how the product has migrated. So it's the ability to run it wherever you need to run it and also our ability to help you, the customer, better implement and understand those workflows that I think are two of the primary differentiators that we have. >> Lisa: Got it. >> I'll add another one if you don't mind. >> You can go ahead, Steven. >> Is lineage and dependencies between workflows. One thing we've done is to augment core Airflow with Lineage services. So using the Open Lineage framework, another open source framework for tracking datasets as they move from one workflow to another one, team to another, one data source to another is a really key component of what we do and we bundle that within the service so that as a developer or as a production engineer, you really don't have to worry about lineage, it just happens. Jeff, may show us some of this later that you can actually see as data flows from source through to a data warehouse out through a Python notebook to produce a predictive model or a dashboard. Can you see how those data products relate to each other? 
And when something goes wrong, figure out what upstream maybe caused the problem, or if you're about to change something, figure out what the impact is going to be on the rest of the organization. So Lineage is a big deal for us. >> Got it. >> And just to add on to that, the other thing to think about is that traditional Airflow is actually a complicated implementation. It required quite a lot of time spent understanding or was almost a bespoke language that you needed to be able to develop in two write these DAGs, which is like fundamental pipelines. So part of what we are focusing on is tooling that makes it more accessible to say a data analyst or a data scientist who doesn't have or really needs to gain the necessary background in how the semantics of Airflow DAGs works to still be able to get the benefit of what Airflow can do. So there is new features and capabilities built into the astronomer cloud platform that effectively obfuscates and removes the need to understand some of the deep work that goes on. But you can still do it, you still have that capability, but we are expanding it to be able to have orchestrated and repeatable processes accessible to more teams within the business. >> In terms of accessibility to more teams in the business. You talked about data scientists, data analysts, developers. Steven, I want to talk to you, as the chief data officer, are you having more and more conversations with that role and how is it emerging and evolving within your customer base? >> Hmm. That's a good question, and it is evolving because I think if you look historically at the way that Airflow has been used it's often from the ground up. You have individual data engineers or maybe single data engineering teams who adopt Airflow 'cause it's very popular. Lots of people know how to use it and they bring it into an organization and say, "Hey, let's use this to run our data pipelines." But then increasingly as you turn from pure workflow management and job scheduling to the larger topic of orchestration you realize it gets pretty complicated, you want to have coordination across teams, and you want to have standardization for the way that you manage your data pipelines. And so having a managed service for Airflow that exists in the cloud is easy to spin up as you expand usage across the organization. And thinking long term about that in the context of orchestration that's where I think the chief data officer or the head of analytics tends to get involved because they really want to think of this as a strategic investment that they're making. Not just per team individual Airflow deployments, but a network of data orchestrators. >> That network is key. Every company these days has to be a data company. We talk about companies being data driven. It's a common word, but it's true. It's whether it is a grocer or a bank or a hospital, they've got to be data companies. So talk to me a little bit about Astronomer's business model. How is this available? How do customers get their hands on it? >> Jeff, go ahead. >> Yeah, yeah. So we have a managed cloud service and we have two modes of operation. One, you can bring your own cloud infrastructure. So you can say here is an account in say, AWS or Azure and we can go and deploy the necessary infrastructure into that, or alternatively we can host everything for you. So it becomes a full SaaS offering. But we then provide a platform that connects at the backend to your internal IDP process. 
So however you are authenticating users to make sure that the correct people are accessing the services that they need with role-based access control. From there we are deploying through Kubernetes, the different services and capabilities into either your cloud account or into an account that we host. And from there Airflow does what Airflow does, which is its ability to then reach to different data systems and data platforms and to then run the orchestration. We make sure we do it securely, we have all the necessary compliance certifications required for GDPR in Europe and HIPAA based out of the US, and a whole bunch host of others. So it is a secure platform that can run in a place that you need it to run, but it is a managed Airflow that includes a lot of the extra capabilities like the cloud developer environment and the open lineage services to enhance the overall airflow experience. >> Enhance the overall experience. So Steven, going back to you, if I'm a Conde Nast or another organization, what are some of the key business outcomes that I can expect? As one of the things I think we've learned during the pandemic is access to realtime data is no longer a nice to have for organizations. It's really an imperative. It's that demanding consumer that wants to have that personalized, customized, instant access to a product or a service. So if I'm a Conde Nast or I'm one of your customers, what can I expect my business to be able to achieve as a result of data orchestration? >> Yeah, I think in a nutshell it's about providing a reliable, scalable, and easy to use service for developing and running data workflows. And talking of demanding customers, I mean, I'm actually a customer myself, as you mentioned, I'm the head of data for Astronomer. You won't be surprised to hear that we actually use Astronomer and Airflow to run all of our data pipelines. And so I can actually talk about my experience. When I started I was of course familiar with Airflow, but it always seemed a little bit unapproachable to me if I was introducing that to a new team of data scientists. They don't necessarily want to have to think about learning something new. But I think because of the layers that Astronomer has provided with our Astro service around Airflow it was pretty easy for me to get up and running. Of course I've got an incentive for doing that. I work for the Airflow company, but we went from about, at the beginning of last year, about 500 data tasks that we were running on a daily basis to about 15,000 every day. We run something like a million data operations every month within my team. And so as one outcome, just the ability to spin up new production workflows essentially in a single day you go from an idea in the morning to a new dashboard or a new model in the afternoon, that's really the business outcome is just removing that friction to operationalizing your machine learning and data workflows. >> And I imagine too, oh, go ahead, Jeff. >> Yeah, I think to add to that, one of the things that becomes part of the business cycle is a repeatable capabilities for things like reporting, for things like new machine learning models. And the impediment that has existed is that it's difficult to take that from a team that's an analyst team who then provide that or a data science team that then provide that to the data engineering team who have to work the workflow all the way through. 
What we're trying to unlock is the ability for those teams to directly get access to scheduling and orchestrating capabilities so that a business analyst can have a new report for C-suite execs that needs to be done once a week, but the time to repeatability for that report is much shorter. So it is then immediately in the hands of the person that needs to see it. It doesn't have to go into a long list of to-dos for a data engineering team that's already overworked that they eventually get it to it in a month's time. So that is also a part of it is that the realizing, orchestration I think is fairly well and a lot of people get the benefit of being able to orchestrate things within a business, but it's having more people be able to do it and shorten the time that that repeatability is there is one of the main benefits from good managed orchestration. >> So a lot of workforce productivity improvements in what you're doing to simplify things, giving more people access to data to be able to make those faster decisions, which ultimately helps the end user on the other end to get that product or the service that they're expecting like that. Jeff, I understand you have a demo that you can share so we can kind of dig into this. >> Yeah, let me take you through a quick look of how the whole thing works. So our starting point is our cloud infrastructure. This is the login. You go to the portal. You can see there's a a bunch of workspaces that are available. Workspaces are like individual places for people to operate in. I'm not going to delve into all the deep technical details here, but starting point for a lot of our data science customers is we have what we call our Cloud IDE, which is a web-based development environment for writing and building out DAGs without actually having to know how the underpinnings of Airflow work. This is an internal one, something that we use. You have a notebook-like interface that lets you write python code and SQL code and a bunch of specific bespoke type of blocks if you want. They all get pulled together and create a workflow. So this is a workflow, which gets compiled to something that looks like a complicated set of Python code, which is the DAG. I then have a CICD process pipeline where I commit this through to my GitHub repo. So this comes to a repo here, which is where these DAGs that I created in the previous step exist. I can then go and say, all right, I want to see how those particular DAGs have been running. We then get to the actual Airflow part. So this is the managed Airflow component. So we add the ability for teams to fairly easily bring up an Airflow instance and write code inside our notebook-like environment to get it into that instance. So you can see it's been running. That same process that we built here that graph ends up here inside this, but you don't need to know how the fundamentals of Airflow work in order to get this going. Then we can run one of these, it runs in the background and we can manage how it goes. And from there, every time this runs, it's emitting to a process underneath, which is the open lineage service, which is the lineage integration that allows me to come in here and have a look and see this was that actual, that same graph that we built, but now it's the historic version. So I know where things started, where things are going, and how it ran. And then I can also do a comparison. 
So if I want to see how this particular run worked compared to one historically, I can grab one from a previous date and it will show me the comparison between the two. So that combination of managed Airflow, getting Airflow up and running very quickly, but the Cloud IDE that lets you write code and know how to get something into a repeatable format get that into Airflow and have that attached to the lineage process adds what is a complete end-to-end orchestration process for any business looking to get the benefit from orchestration. >> Outstanding. Thank you so much Jeff for digging into that. So one of my last questions, Steven is for you. This is exciting. There's a lot that you guys are enabling organizations to achieve here to really become data-driven companies. So where can folks go to get their hands on this? >> Yeah, just go to astronomer.io and we have plenty of resources. If you're new to Airflow, you can read our documentation, our guides to getting started. We have a CLI that you can download that is really I think the easiest way to get started with Airflow. But you can actually sign up for a trial. You can sign up for a guided trial where our teams, we have a team of experts, really the world experts on getting Airflow up and running. And they'll take you through that trial and allow you to actually kick the tires and see how this works with your data. And I think you'll see pretty quickly that it's very easy to get started with Airflow, whether you're doing that from the command line or doing that in our cloud service. And all of that is available on our website >> astronomer.io. Jeff, last question for you. What are you excited about? There's so much going on here. What are some of the things, maybe you can give us a sneak peek coming down the road here that prospects and existing customers should be excited about? >> I think a lot of the development around the data awareness components, so one of the things that's traditionally been complicated with orchestration is you leave your data in the place that you're operating on and we're starting to have more data processing capability being built into Airflow. And from a Astronomer perspective, we are adding more capabilities around working with larger datasets, doing bigger data manipulation with inside the Airflow process itself. And that lends itself to better machine learning implementation. So as we start to grow and as we start to get better in the machine learning context, well, in the data awareness context, it unlocks a lot more capability to do and implement proper machine learning pipelines. >> Awesome guys. Exciting stuff. Thank you so much for talking to me about Astronomer, machine learning, data orchestration, and really the value in it for your customers. Steve and Jeff, we appreciate your time. >> Thank you. >> My pleasure, thanks. >> And we thank you for watching. This is season three, episode one of our ongoing series covering exciting startups from the AWS ecosystem. I'm your host, Lisa Martin. You're watching theCUBE, the leader in live tech coverage. (upbeat music)

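For readers who want a concrete picture of the DAGs discussed in the interview above, here is a minimal, hypothetical Airflow pipeline written with the TaskFlow API. It is an illustrative sketch only, not Astronomer's platform code or the demo Jeff walks through; the task names, schedule, and storage paths are invented for the example.

```python
# Hypothetical Airflow DAG: extract data, train a model, publish it, on a daily schedule.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2023, 1, 1), catchup=False)
def recommendations_pipeline():
    """Toy daily pipeline in the spirit of the demo: extract, train, publish."""

    @task
    def extract_orders() -> str:
        # A real task would run SQL against a warehouse and stage the result;
        # here we just return a made-up dataset location.
        return "s3://example-bucket/orders/latest.parquet"

    @task
    def train_model(dataset_path: str) -> str:
        # Placeholder for model training; returns a made-up artifact location.
        print(f"training recommender on {dataset_path}")
        return "s3://example-bucket/models/recommender/latest"

    @task
    def publish(model_path: str) -> None:
        # Placeholder for pushing the model to a serving system or dashboard.
        print(f"publishing {model_path}")

    publish(train_model(extract_orders()))


recommendations_pipeline()
```

Something in this shape, committed to a Git repository and picked up by a CI/CD pipeline, is roughly the kind of Python file the Cloud IDE described in the demo generates and the Airflow scheduler then runs on a daily cadence.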
Published Date : Mar 9 2023

SUMMARY :

Steven Hillion, Chief Data Officer, and Jeff Fletcher, Director of ML at Astronomer, join theCUBE's AWS Startup Showcase to discuss data orchestration and MLOps. They explain how Astronomer's managed platform builds on Apache Airflow to schedule and manage data and machine learning pipelines at scale, how Open Lineage tracks datasets as they move between workflows, and how features like the Cloud IDE make orchestration accessible to data scientists and analysts. Jeff demonstrates the platform end to end, and the pair point viewers to astronomer.io for documentation and trials.


Cheryl Hung and Katie Gamanji, CNCF | KubeCon + CloudNativeCon Europe 2021 - Virtual


 

>> Narrator: From around the globe, it's theCUBE, with coverage of KubeCon + CloudNativeCon Europe 2021 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners. >> Welcome back to theCUBE's coverage of KubeCon + CloudNativeCon '21, part of the CNCF's annual event this year. It's virtual again. I'm John Furrier, host of theCUBE, and we have two great guests from the CNCF: Cheryl Hung, VP of Ecosystem, and Katie Gamanji, who's the Ecosystem Advocate for CNCF. Thanks for coming on. Great to see you. I wish we were in person; maybe in the fall. Cheryl, Katie, thanks for coming on. >> Um, definitely hoping to be back in person again soon, but John, great to see you and great to be back on theCUBE. >> You know, I have to say one of the things that really surprised me is the resilience of the community around what's been happening with the virtual events and Covid. A lot of people have been, um, you know, disrupted by this, but the consensus is that developers are used to working remotely and virtually from home, and so not too much disruption, but a hell of a lot of productivity. You're seeing a lot more cloud native, um, projects, you're seeing a lot more mainstreaming in the enterprise, you're starting to see cloud growth, just a really nice kind of growth. And we've been saying for years, a rising tide floats all boats, Cheryl, but this year you're starting to see real mainstream adoption with cloud native, and this has really been part of the work of the community you guys have done. So what's your take on this? Because we're going to be coming out of this Covid pretty soon; there's a post-Covid light at the end of the tunnel. What's your view? >> Yeah, definitely, fingers crossed on that. I mean, I would love Katie to give her view on this, in fact, because she came from Conde Nast and American Express, both huge companies that have adopted cloud native successfully. And then in the middle of the pandemic, in the middle of Covid, she joined CNCF. So Katie really has a view from the trenches, and Katie, I would love to hear your thoughts. >> Yeah, absolutely. Uh, definitely cloud native adoption, when it comes to the tooling, has been more prominent in the enterprises, and that has been confirmed by my role at American Express; that is the role I moved from towards CNCF. But the more surprising thing is that we see big companies, we see banks and financial organizations, that are looking to adopt open source, and more importantly, they're looking for ways to either contribute or actually to direct it more into these areas. So from that perspective, I've been pretty much at the nucleus of the enterprise, and the adoption of cloud native is definitely moving; it's slow paced, but it's definitely moving forward as well. Um, and now that I'm in the role with CNCF as an Ecosystem Advocate and leading the end user community, the community is definitely growing, and I'm always intrigued to find out more about cloud native usage. One of the things that I find quite intriguing is the fact that no cloud native usage, no single platform, is going to be the same. So it's always intriguing to find new use cases, and find those extreme use cases as well, because that really pushes the community forward.

>> What I want to do is unpack the end user aspect of this. It has been a hallmark of the CNCF for years, always been a staple of the organization, but this year, more than ever, it seems to be prominent as people are integrating. What about the growth? I mean, from last year to this year, in the end user ecosystem, how have you guys seen the growth? Are there any highlights? Do you have any stats or observations around how the ecosystem is growing around the end user piece? >> Sure, absolutely. I mean, I can talk directly about CNCF and the CNCF end user community. Much like everything else, you know, Covid kind of slowed things down, so we're kind of not entirely surprised by that, but we're still growing over 2020, and in fact just in the last few months we have brought in some really, really big names like Peloton, Airbnb, Citibank, um, just some incredible organizations who have really adopted cloud native, who have seen the success and the benefits of it, and now are looking to give back to the community, as Katie said, get involved with open source and be more than just a passive consumer of the technologies, but actually become leaders in their own right.

>> Katie, talk about the dynamic of developers at end user organizations. I mean, you have been there; you've now been on both sides of the table, if you will. Not two sides of the table; it's more like a round table, if you will, but community driven. But traditional, uh, end user organizations, not the early adopters, not the hyperscalers, but the ones that are now really embedding hybrid, um, are changing how IT works, how modern applications are being built. That's a big theme in these mainstream organizations. What's the dynamic going on? What's your view? >> I think for any organization, the core of what moves the organization towards cloud native is, um, pretty much being ahead of your competitors. And now we have this mass of different organizations in the cloud native space, and that's why we see more kind of eyes towards this area. So, um, definitely in this perspective, when it comes to the technology aspect, companies are looking to deploy complex applications in an easier manner, especially when it comes to pushing them to production systems securely, faster, um, and continuously as well. They're looking to have this competitive edge when it comes to how quickly they can respond to customer feedback. And as well, they're looking for this, um, hybrid element that has been talked about. Again, when we're talking about enterprises, it's not just about public cloud; it's about how we can run the applications securely, getting both an element of data centers or private cloud as well. And now we see a lot of projects which are evolving around that area, but more importantly, there is adoption, and where there's adoption, there is a feedback loop, and that's what represents the organic growth.

>> That's awesome. Cheryl, I'd like you to define what you mean when you say end user driven open source. What does that mean? >> Mm, this is a really interesting dynamic that I've seen over the last couple of years. So what we see is that more and more of the open source projects are from end users who are solving their own problems, creating their own projects, and donating these back to the community. An early example of this was Envoy from Lyft and Jaeger from Uber, but Spotify also recently donated Backstage, which is a developer portal which has really taken off. We've also got examples from Intuit donating Argo. Um, I'm sure there are some others that I've just forgotten. But the really interesting thing I see about this is that classically, right, maybe a few years ago, if you were an end user organization, you'd get involved through a vendor. You'd go to a Red Hat or something and say, hey, you fix this on my behalf, because you know that's what I'm paying you to do. Whereas what I see now is end users saying, we want to keep this expertise in house, and we want to be owners of our own kind of direction and our own fate when it comes to these open source projects. And that's been a big driver for this trend of end user driven open source.

>> The open model is just such a great thing. And I think one of the interesting things is that it fits in with a lot of people who want to work for mission driven companies, but here there's actually a business benefit, as you pointed out, in terms of the dynamic of bringing stuff to the community. This is interesting. I'm sure that the ability to do more collaboration, um, either hiring or contributing, kind of increases when you have this end user dynamic, because that's a pretty big decision to donate and bring something into open source. What's the playbook, though? If I'm sitting in an end user organization like American Express, Katie, or a big company, and say, hey, you know, we really developed this really killer use case, niche to us, but we want to bring it to the community, what do they do? Is there like a manager? Do they knock on someone's door? Is there a repo? I mean, how does an end user get this done? >> Mm. Um, I think one of the best resources out there is called the TODO Group, which is an organization underneath the Linux Foundation. So it's kind of a sister group to CNCF, which is about open source program offices, and how do you formalize such an open source program? Because it's pretty easy to say, oh, well, just put something on GitHub. But that's not the end of the story, right? Um, if you want to actually build a community, if you want other people to contribute, then you do actually have to do more than just drop it on GitHub and walk away. So I would say that if you are an end user company and you have created something which scratches your own itch, and you think other people could benefit from it, then definitely come. You could email me, you could email Chris Aniszczyk, who is the CTO of CNCF, and just get in touch and sort of ask around about what are the things that you could do, in terms of what you have to think about with licensing, how do you develop a community governance program, um, trademark issues, all of these things.

>> It's interesting how open source is growing so much now. Chris has got so much action going on; new verticals are opening up, you know, so much action. Cheryl, you had posted on the internet predictions for cloud native, which I found interesting, because there's so much action going on you have to break things out into pillars: tech, DevOps, and ecosystem, each one with a slew of key trends. So take us through the mindset. Why break it out like that? You've got tech, DevOps, and ecosystem; traditionally that was all kind of bundled in one. Why? Why the pillars? And is it because there's so much action? What's the basis behind the predictions? >> Um, so originally this was just a giant list of things I had seen from talking to people, reading around, and seeing what people are talking about on social media. Um, and once I had arrived at these 10, I thought about what this actually means for the people who are going to look at this list and what they should care about. So I see tech trends as things related to tools, frameworks, um, perhaps architectures. I see DevOps as more of a combination of process, people, and culture best practices, and then ecosystem was kind of anything else broader than that, things that happen across organizations. So you can definitely go to my Twitter, you can go to @oicheryl, and take a look at this. This is my list of 10, and I would love to hear from you whether you agree with it, whether you think there are other things that I've missed, or what yours would be.

>> I love it, I love the top 10. Well, first of all, I think this is very relevant. The one that I would ask you about is more Rust in cloud native; that's the number one item. Um, I think cross-cloud is definitely totally happening, I think people are really starting to think about that, and so I'd love to get your comments on that. But I think the thing that jumped out at me was the DevOps piece, because this is a trend that I've been seeing a lot more, certainly even in academic institutions, for folks in school, right? Um, going to college for computer science and engineering. This idea of large scale cloud is not so much an IT practice; it's much more of a cloud native mindset. So I think this idea of ops is so much more about scale. I use SRE only because I can't think of a better word around it, and certainly the edge pieces with Kubernetes. I think this is the biggest story to me; that's where all the action seems to be when I talk to people around what they're working on, in terms of training new people, onboarding, and whatnot. Katie, you're shaking your head, you're like, yeah. What are your thoughts? >> Yeah, I have definitely been, uh, through all of these stages. DevOps, I think, is more of a culture, like a pattern to adopt within an organization, more than anything. So I've been pre-DevOps, within DevOps, and actually through the evolution of it, where we actually added an SRE team as well. Um, I think having these cultural changes within an organization is necessary, especially if they want to iterate quicker and actually deliver value to the customers with minimal delay, because what it actually drives is the collaboration between teams which were initially segregated. And that's why I think there is a paradigm nowadays which is called DevSecOps, which actually moves security more to the left. This has been very popular, especially in the last couple of months; lots of talks around it, and there is even a security co-located event at KubeCon that is going to focus on that mainly. Um, but as well, within the DevOps area, um, one of the models that has been quite prominent has been GitOps, which pretty much uses the power of Git repositories to describe the state of the applications, how they actually should be within the production system. And within the cloud native ecosystem, there are two main tools that pretty much lead this area: there's Argo CD, which has been donated by Intuit, which is one of our end users, and we have Flux as well, which has been donated by Weaveworks. Both of these projects are currently within the incubation stage, which pretty much by default showcases that there is a lot of adoption from organizations, um, more than 100 for some of them. So there is wider adoption, um, and one thing I would like to mention is the GitOps working group, which emerged I think between KubeCon Europe and North America last year, and that again is more to define a manifest of how exactly GitOps patterns should be adopted within organizations. So there are a lot of, I would say, initiatives, and this is further confirmed by the tooling that we have within the ecosystem.

>> That's really awesome insight. I want to, if you don't mind, follow up on that: why is GitOps so important right now? Is it because of the emphasis on security? Is it the emphasis on more scale? Is it just because it's pretty much Git, and it's okay just to store it over there? Is it because there are so many more inspections going on around it? I mean, code reviews have been going on for a long time. What's the big deal? Why is it so hot right now, in your opinion? >> I think there are definitely a couple of aspects that are quite important. You mentioned security; that's definitely one of them with the GitOps pattern, and there is a pull model rather than a push model. So you have the actual tool, for example Argo CD or Flux, watching the repository, and if any changes are identified, it is going to pull those changes automatically. So the first thing that we actually get from this model is that we will always be aware of the delta between what's within our repositories and the production system. Usually, if you have a push model, you can push the changes towards, say, the staging environment, but not always to production, because you have the change window; with the GitOps model, you'll always be aware of what the delta is, and you have quite a nice way to visualize that, especially with Argo CD, which has the UI as well. As well, with the GitOps pattern, there is less necessity to share credentials with the actual pipeline tool. Because Argo CD and Flux are natively built around Kubernetes, all the secrets are going to be residing within the cluster; there is no need to share any extra credentials or extra permissions with external tools. And there is scale: again, with GitOps you have historical data points, which allows us to easily revert, um, to stable points of the applications in the past. So multiple, multiple benefits, I would say, but definitely security, I think, is one of the main ones, and it has been talked about quite a lot as well.

>> A lot of these end user stories revolve around these dynamics, and the ones you guys are promoting, from your members as well as in the community at large. I hate to use the phrase day two operations, but that really is the issue: like, okay, we're up and running, I want more automation. This is, again, an ops kind of vibe here, where it's like, okay, we've got to go troubleshoot all this, but it should be working as more stuff comes in. This becomes more and more the dynamic. Is that because of just more edges, more things, more devices? What's the push behind all these stories around this automation and day two operations thing? What do you guys think? >> I think the expectations are getting higher and higher, to be honest. A few years ago it was enough to use containers and start using the barest minimum, you know, to orchestrate those containers. But now what we see is that, you know, it's easy to choose the technology, it's easy to install it and even configure it, but as you said, John, those day two operations are really, really hard. For example, one of the ones that we've seen up and coming and we care about from CNCF is Kubernetes on the edge. And we see this as enabling telco use cases and 5G and IoT, and really, really broad, difficult use cases that just a few years ago would have been nigh-on impossible. Katie, your turn; Katie, you also talk about edge, right? >> Absolutely. I think I really like to watch some of the talks at KubeCon, especially given by the big organizations that have to manage thousands or tens of thousands, hundreds of thousands of customers, and they have to deliver a cluster to these teams. Now, from their point of view, they pretty much have to manage clusters at scale. There is definitely the edge out there, and they are really kind of pushing the technology towards how we can get closer to the physical devices within the customer's, let's say, bubble or surface area. So edge has definitely been something which has been moving a lot when it comes to the cloud native ecosystem. We've had a lot of projects moving towards the incubation stage; K3s has been there, um, for a while, and again has a lot of adoption and is known for its stability. But another thing that I would like to mention is that currently we have a lot of projects that are edge-focused but within sandbox, so there is, again, a lot of potential: if there's going to be a higher demand for this, I would expect these tools to move from sandbox to incubation and even graduation. So that's definitely something which, uh, is moving, and there is dynamism around it.

>> Well, Cheryl, Katie, you guys are awesome; love the work you're doing. I've got to ask the final question, since you brought it up, about the expectations. Cheryl, if you guys could both end the segment with a comment around expectations, as the industry and companies and developers and participants continue to grow. What's changed with CNCF and KubeCon + CloudNativeCon as the expectations have been growing, and the stakes are higher too, frankly? I mean, you've got security, you mentioned these things, edge, GitOps; you start to see the maturation of this ecosystem. What's new and what's expected of you guys? What do you see, and how are you guys organizing? >> I think we can definitely say the ecosystem has matured a lot compared to a few years ago. Same with CNCF, same with KubeCon. I think the very first KubeCon I went to was Berlin, which was about 1800 people. Um, it's kind of mind boggling to see how much it's grown since then. I mean, one of the things that we try to do is to expand the number of people who can reach the community. So, for example, we launched Kubernetes Community Days, which means community organized events, in Africa, for example, for people who couldn't come to large events in North America or Europe. Um, we're also launching things to help students. I actually love talking to students, because quite often now you talk to them and they say, oh, I've never run software in anything other than a container. You're like, yeah, well, this was a new thing; this is brand new a few years ago, and now you can be 18 and have never tried anything else. So it's pretty amazing. But yeah, there's definitely always space to grow the community.

>> Yeah, once you go cloud native, it's like, you know, you've never loaded Linux on a server before. I mean, what's going on? Katie, your thoughts as expectations go higher. And certainly there's more migration, not only for young folks, because they're jumping into this where engineering meets computer science; it's now cross-discipline. You're seeing scale, you mentioned scaling up, those are huge factors; you've got younger people, you've got cross-training, you've got cybersecurity, and you've got FinOps that Chris is working on, so much is happening. How do you guys keep up, and how are you going to raise the bar? >> Absolutely. I think there's definitely technology moving forward, but I think nowadays there is more need for actual end user stories. While at the beginning of KubeCons there was a lot of focus on the technical aspects, how can you fix this particular problem of deploying between two clusters or deploying at scale, there were a lot of technical aspects; nowadays people are looking for the stories, because, as I mentioned before, not one platform is going to be the same when it comes to cloud native, and I think the community is still trying to look for some patterns or some standards. And we actually can see that, especially when it comes to the open standards. We can see this moving within, um, observability and application delivery: we have, for example, Crossplane and KubeVela, we have OpenMetrics and OpenTracing as well, which focus on observability, and all of the interfaces that we have around, um, Kubernetes, service mesh, and so forth. All of these pretty much try to bring a benchmark, making it easier to integrate those special use cases, um, when it comes to the actual extreme technology kind of solutions that you need to provide. And, um, I was mentioning the end user stories that are more in demand nowadays, mainly because these are very, very necessary for the community; for example, the SIGs or the project maintainers require feedback to actually move forward. And as part of that, I would like to mention that we've recently soft launched the end user lounge, which really focuses on this particular aspect of end user stories. We try to pretty much question our end users and really understand what moved them to adopt cloud native, what keeps them on this path, and what future challenges they would like to tackle, or are facing at the moment and would like to solve in the future. So we're trying to create this feedback loop between the end users and the projects out there. I think this is something which needs to bring these two spheres, which are currently segregated, a bit more closely together, and we're trying to solve that.

>> You guys do great work, great job. Cheryl, wrap us up real quick; take a minute to put a plug in for the CNCF and the ecosystem. What's the fashion this year? What's hot? What's the trend? What are you guys doing? Share some quick updates on what's going on in the ecosystem from your perspective. >> Yeah, I mean, the ecosystem, even though I just said that we're maturing, you know, the growth has not stopped. Now what we're seeing is, as Katie was saying, you know, more specific use cases, even bigger, even more demanding environments, even more kind of crazy use cases. I mean, I love the story from the U.S. Department of Defense about putting Kubernetes on their fighter jets, and putting Istio on fighter jets; you know, it's just absurd to think about it. But I would say definitely come and be part of the community, share your stories, share what you know, help other people. Um, if you are an end user of these technologies, then go to cncf.io/enduser and just come and be part of our community, you know, meet your peers and hear what everybody else is doing. >> Well, having Kubernetes and Istio on jets, that's the Air Force; I would call that the technical edge, Katie, to, you know, bring back the edge. Cheryl, Katie, thank you so much for sharing. The ecosystem is robust; a rising tide floats all boats, as we always say here on theCUBE. It's been great to watch, and we'll continue to watch the rise. I think it's just the beginning; we're starting to see post-pandemic visibility, cloud native, more standards, more visibility into the economics and value, and it's great to see the ecosystem rising up with the end users as well. So congratulations, and thanks for coming on. >> Thank you so much, John, it's a pleasure. Appreciate it. >> Thank you for having us, John. >> Great to have you on. I'm John Furrier with theCUBE, here for KubeCon + CloudNativeCon '21 Virtual. Soon we'll be back in real life. Thanks for watching.

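To make the GitOps pull model that Katie describes above a little more concrete, here is a deliberately simplified reconciliation loop. It is a toy sketch of the pattern, not how Argo CD or Flux are implemented; the repository path, branch, and polling interval are invented, and it assumes the git and kubectl binaries are available wherever it runs.

```python
# Toy illustration of the GitOps pull model: an agent running next to the
# cluster pulls the desired state from Git and applies it, so credentials
# stay on the cluster side. A sketch only, not Argo CD or Flux.
import subprocess
import time

REPO_DIR = "/opt/gitops/desired-state"   # placeholder: local clone of the config repo
BRANCH = "main"                          # placeholder branch name
POLL_INTERVAL_SECONDS = 60


def current_commit() -> str:
    # Ask git for the commit currently checked out in the local clone.
    return subprocess.run(
        ["git", "-C", REPO_DIR, "rev-parse", "HEAD"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()


def reconcile_forever() -> None:
    last_applied = None
    while True:
        # Pull: fetch the desired state from the repository.
        subprocess.run(["git", "-C", REPO_DIR, "pull", "origin", BRANCH], check=True)
        head = current_commit()
        # Only act when there is a delta between Git and what was last applied.
        if head != last_applied:
            subprocess.run(["kubectl", "apply", "-f", REPO_DIR], check=True)
            last_applied = head
        time.sleep(POLL_INTERVAL_SECONDS)


if __name__ == "__main__":
    reconcile_forever()
```

The point of the sketch is the direction of the arrows: the agent sits next to the cluster and pulls the desired state from Git, so cluster credentials never have to be handed to an external CI/CD system, which is the security property discussed in the interview.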
Published Date : May 5 2021

SUMMARY :

Cheryl Hung, VP of Ecosystem, and Katie Gamanji, Ecosystem Advocate at CNCF, join theCUBE for KubeCon + CloudNativeCon Europe 2021 Virtual to discuss the growth of the cloud native end user community. They cover mainstream enterprise adoption, end user driven open source projects such as Envoy, Jaeger, Backstage, and Argo, Cheryl's ten cloud native predictions, the rise of GitOps with Argo CD and Flux, Kubernetes at the edge, and how organizations can get involved through the CNCF end user community at cncf.io/enduser.


Katie Gamanji, American Express | KubeCon + CloudNativeCon Europe 2020 - Virtual


 

>> Narrator: From around the globe, it's theCUBE. With coverage of KubeCon, and CloudNativeCon Europe 2020 virtual, brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners. >> Hi, I'm Stuart Miniman, and this is theCUBE's coverage of KubeCon, CloudNativeCon, the European show, which of course for 2020 is virtual. Always love when we get to talk to the practitioners, as well as many of them heavily involved in what happens at the CNCF, you know, all these open source communities. Happy to welcome to the program, first time guest Katie Gamanji. She is a Cloud Platform Engineer with American Express, and she's also a member of the CNCF's TOC, which is the technical oversight committee. Katie, thanks so much for joining us. >> Thank you for having me today. I'm quite excited to be here. >> Excellent. Well, you are, as I mentioned, you're part of the TOC. You also present at the show last year. You presented at one of the KubeCon shows this year. As I mentioned, you were with American Express now. I believe it was Conde Nast, You shared some of the journey along those lines. Maybe for our audience, give us a little bit about, you know, your background, and what's got you involved in, you know, some of these projects in communities. >> Absolutely. Oh, such a good question. I can talk forever about that. My passion about Cloud Native. So, my name is Katie Gamanji, and I am one of the Cloud Platform Engineer for American Express. I joined American Express around five months ago, and I am part of the team that aims to transform the current platform, by embracing the Cloud Native principles, and making the best use of the open source tools. As mentioned previously, I've been working for Conde Nast. I've been in that role for almost two years. And as part of that role, we aim to create a centralized globally distributed platform that had Kubernetes as a central piece. And that was the role which actually got me involved more into the Cloud Native tooling, and I've been exploring them quite heavily since then. And that's why I wanted to get more in terms more contribution to the community. I've been doing that previously for different talks, and actually writing blog posts on different, giving different guides on how to start using some of the tooling. However, this year I decided to apply for TOC. And I've been elected as a TOC from the end user perspective, so I'm representing pretty much the overview of what end users think that the next direction should be within the Cloud Native landscape. And for the last, actually for the past five months, I've been on the TOC, with the CNCF, and it's only 11 of us. And we are in charge to make sure that we can guide, and set this technical vision for this year for the CNCF landscape. >> Yeah. Katie, I definitely want to talk about the TOC piece, but I want to back up a little bit. And you talked about some of the tooling, you talked about the community. Help me understand a little bit, you know, from a business standpoint, why you know, Conde Nast, American Express, looking towards using, Kubernetes and all of these open tours toolings. What was the charter, the challenge put before them, that felt that doing things this new way would help them. >> I think this actually goes a couple of years back. In my previous role before Conde Nast, I was in a team which aimed to provision infrastructure, but it was in a more, how can I say old fashioned manner? We had to configure our data centers manually, configure the VMs and processes. 
We had (indistinct) of automation. But at the time, this was maybe three years ago. I started to look into Kubertetes, and it was still baby steps, like, there was interest from the community, and I really wanted to, kind of get my hands on it more. And when I was looking for a role, which was at Conde Nast, I was looking for something which aimed to introduce containers in the entire infrastructure. And I think Conde Nast actually was very appealing as a role because not many expect for a media company to invest in technology, and actually the underlined infrastructure. So, from that perspective, I thought it's actually quite a good use case to change this perspective in the community. As well, with Conde Nast, it was a very international company. We had different business units around the world. All of them had different tech stacks. So, the challenge itself, how do we unify that? How do we centralize the deployment process of the application and serving our requests? But at the same time, have these individualized layer for every single market to still personalize their content. So, it was a very good project, I think, for me to further go into the Cloud Native to link, and actually definitely proved to be the right role for that. And currently I am in a different role. It's actually a financial company. But I think this is my personal challenge. I think there is a perception of financial companies moving towards modernization of their infrastructure, but it's still going quite slowly. And I think my personal challenge in this perspective is to make sure that actually FinTech is a thing, but FinTech in Cloud Native, actually using open source tooling is possible. Obviously, we can transition that to some of the secondary base, maybe not the core base of the business, but this transition, actually getting the change going is the most important bit. Once actual goes, it's just a boulder like, downhill, which is going to take everything around, and refactoring bit by bit. >> Yeah. Katie, you brought up a really important point. You know, in today's world, especially, you know, this year 2020 with the global pandemic going on, being able to react fast is so important regardless of what industry you're in. You talked about in your previous role, you had a global rollout to work across a lot of environments. Help us understand a little bit underneath the covers. You know, using this tool set, how does this help you move faster? How does it, you know, in some ways unify teams, regardless of what challenges they have? >> I think for us at least at Conde Nast, it was quite important to have one platform, so actually centralized all of our required, actually gather all our requirements, and translate them in within the platform. So, what we actually wanted, was to us to have Kubernetes as the gravitational point. Now, with Kubernetes, we'd have some of the main functionalities such as portability or flexibility. We'd be able to scale to very easily without, actually with minimal effort, but more importantly, we'll be able to transport our platform to different regions. So, to actually replicate the entire tech stat. So once we have these centralized platform, it was very easy for us to distribute them. For example, in regions across the US. And that time I was working there at least. There was an intentional strategy to replicate the tech stack in China. And that'll be very easy because with Kubernetes you just have this lifting shift capabilities. 
As long as you have VMs, or compute, you'll be able to run the entire Conde Nast tech stack. So that was a very big point for us in moving to Kubernetes. Whilst I think at American Express, the strategy is completely different. There's still a lot of heritage infrastructure we have at the moment. Actually, we are running on Kubernetes, but the provider itself is OpenShift, and this is proving to showcase some of the issues for us moving forward, so we'd like to transition to a neater way to run Kubernetes. And this potentially means, we haven't finalized the decision yet, but it might be that we'd be using a cloud provider, or it might be the case of actually running Kubernetes self-service, so we'd actually have to maintain our clusters ourselves. This is not defined, but the underlying idea is that we want a more modern version of Kubernetes, or of managing Kubernetes, moving forward. So this is one of the strategies. But I think within American Express, the main underlying idea is that we really want to inner-source most of the configuration. Historically we had different contractors and vendors working on bits and pieces; we'd like to actually get all of these in house and have a centralized way to manage our infrastructure. So this is the underlying project, which I think is going to take a while, but again, there is an intention to include Cloud Native tooling and technologies, and I think it's very healthy thinking in terms of technology. >> Well, Katie, you highlighted two really important topics that we've seen out there. Number one is exactly where my infrastructure is; it's going to change, and I don't need to think about it. So you talked about public cloud, data centers, it might change in the future. And number two, making sure that you have the skill set in house. Something we definitely learned from the outsourcing trends of the past was, when things needed to be changed, if I had to rely on someone else it became very difficult. So if you're leveraging Kubernetes and you have the developer chops to be able to respond to the business in an agile way, you're going to be much more ready to handle whatever happens in the future. >> Exactly. >> So important. >> I want to switch and talk a little bit about your TOC work, presenting at the show. It's great to see companies enabling their employees to participate in this sort of thing. Help me understand, for you personally, what is the support that you get from your last job and your current job to participate in these open source projects and communities? >> Right. I think both of the companies, Conde Nast and American Express, are quite interested in being part of the Cloud Native community. With Conde Nast, they're actually part of the end users. With American Express, I think there is thinking to actually join the end user community, so this might be something which will happen in the future. I cannot guarantee it, but I'm hoping. This is going to be, again, one of my personal challenges, making sure we get into the community and share some of our use cases. But for now, I think both of the companies actually understand the value of being part of it, of actually using open source, but more importantly, understanding how other companies use it. Not one use case, especially when it comes to Kubernetes, not one Kubernetes platform, is going to be the same. There's always going to be different underlying technologies that plug into it. There's always going to be different ways to use different tooling.
And having this concentrated community and source of information, I think the companies actually understand the value in that and in contributing to that. So I think this is something which I've been quite passionate about: to actually understand some of the strengths, to understand how some of the tooling is used, and whether there is actual hope for a project, or whether it's something which actually specializes in a very minimal kind of niche problem and is going to be useful for maybe one or two big companies. It depends. So I think this is something I've been passionate about, and I've actually had support throughout. In my previous company and my current company, I have very strong support from my higher-ups to actually contribute more and be part of the end user community, and as such to be on the TOC as well, which comes with a bunch of responsibilities. But I think in terms of support, I definitely had the necessary support all the way through, for which I'm quite thankful. >> Katie, you mentioned some of your passions. I know from what I've read online that you're passionate about some of the tooling there, and that's some of what you're sharing through your presentations. So, I'd love it if you could share a little bit about what we're going to be talking about at the Europe show right now, and any other kind of tools that are getting your time and attention these days. >> So I think lately I've been exploring Cluster API, the new release. I've been waiting for the new release; actually, everyone has been waiting for the new release for a couple of months. Now we actually have v1alpha3, with some of the cool features such as managed control planes for clusters. And the second tool, or set of toolings, I'm working with lately are the ones which concentrate on the GitOps model. So during the session at KubeCon in Europe this year, I will be presenting Cluster API, a guide on how to get started, so an overview of all the components necessary to create your own clusters, in different cloud providers as well. But I will crown that presentation by delivering a demo of how you can provision your cluster with GitOps, and I'm going to use Argo CD at the moment. And the end result is going to be provisioning your cluster in AWS with maybe one click, and you have a cluster with three masters, maybe five nodes, and you just wait. Pretty much you can have a coffee while your cluster is provisioning. But more importantly, with Cluster API, again, we have reusable manifests, which will allow us to have this one interface to integrate with different cloud providers. So we actually have this interoperability of manifests across different cloud providers. So look forward to that. (A minimal sketch of this Cluster API flow appears after the transcript.) >> Excellent. Katie, last question I have for you: what advice would you give your peers? Where do you see the need for more participation, as people are getting into this environment? Where do you think they can help? >> Oh, such a good question. I think contribution is necessary in most of the SIGs in the Kubernetes community. So, I think it depends on the passion everyone has. If they're quite passionate about networking, or storage, or even services, there is going to be a group of people that have the same passion and interest as you. So please reach out and contribute. And I think I'd like to mention, you don't necessarily need to be an active coder to be part of the SIGs or to be part of the Cloud Native community.
Because being in technology, of course, is an advantage; however, most of the ideas in actually making sure that we cover use cases for different tooling come from a diverse user base as well. So if you have an interest, I think that's going to be a very good engine to further enable different ideas within the SIGs. So I wouldn't be able to recommend a particular project; I think this is very specific to everyone's daily role (indistinct). But yeah, I think within the CNCF we have a collection of SIGs in which you pretty much will find a place for yourself and your skills. >> Well, Katie, thank you so much for sharing your journey and participating so actively in the community. Thanks so much for joining us. >> Thank you for having me today. >> All right, stay tuned for much more coverage from KubeCon, CloudNativeCon Europe 2020, virtual edition. I'm Stuart Miniman, and thank you for watching theCUBE. (gentle music)
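To make the Cluster API and GitOps discussion above a bit more concrete, here is a minimal, hypothetical sketch (not the demo from the talk) of what a declarative Cluster API object looks like to a management cluster, applied here with the Kubernetes Python client. It assumes a management cluster that already has Cluster API v1alpha3 and the AWS infrastructure provider installed; the names, namespace, and CIDR are illustrative placeholders.

```python
# Hypothetical sketch: register a Cluster API "Cluster" object with a
# management cluster. Assumes Cluster API v1alpha3 and the AWS provider
# are installed; all names below are placeholders.
from kubernetes import client, config

config.load_kube_config()        # connect to the management cluster
api = client.CustomObjectsApi()

cluster = {
    "apiVersion": "cluster.x-k8s.io/v1alpha3",
    "kind": "Cluster",
    "metadata": {"name": "demo-cluster", "namespace": "default"},
    "spec": {
        "clusterNetwork": {"pods": {"cidrBlocks": ["192.168.0.0/16"]}},
        # Reference to the AWS-specific infrastructure object; a complete
        # setup would also reference a control plane (e.g. KubeadmControlPlane)
        # and machine templates for the worker nodes.
        "infrastructureRef": {
            "apiVersion": "infrastructure.cluster.x-k8s.io/v1alpha3",
            "kind": "AWSCluster",
            "name": "demo-cluster",
        },
    },
}

# Create the custom resource; the Cluster API controllers then reconcile
# the declared state into real infrastructure.
api.create_namespaced_custom_object(
    group="cluster.x-k8s.io",
    version="v1alpha3",
    namespace="default",
    plural="clusters",
    body=cluster,
)
```

In the GitOps flow Katie describes, the same manifests would live in a Git repository watched by Argo CD, so the "one click" is effectively one commit; the script above only illustrates the shape of the declarative object that the management cluster reconciles.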

Published Date : Aug 18 2020


Jason Bloomberg, Intellyx | KubeCon + CloudNativeCon EU 2019

>> Live from Barcelona, Spain, it's theCUBE! Covering KubeCon and CloudNativeCon Europe 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners. >> Welcome back. This is theCUBE's live coverage of KubeCon, CloudNativeCon 2019 here in Barcelona, Spain, 7,700 here in attendance to hear about all the Cloud Native technologies. I'm Stu Miniman; my cohost for the two days of coverage is Corey Quinn. And to help us break down what's happening in this ecosystem, we've brought in Jason Bloomberg, who's the president at Intellyx. Jason, thanks so much for joining us. >> It's great to be here. >> All right. There's probably some things in the keynote I want to talk about, but I also want to get your general impression of the show and, beyond the show, just the ecosystem here. Brian Liles came out this morning. He did not sing or rap for us this morning like he did yesterday. He did remind us that the dinners in Barcelona meant that people were a little late coming in here because, even once you've got through all of your rounds of tapas and everything like that, getting that final check might take a little while. They did eventually filter in, though. Always a fun city here in Barcelona. I found some interesting pieces. Always love some customer studies. Conde Nast talking about what they've done with their digital imprint. CERN, who we're going to have on this program; as a science lover, you want to geek out as to how they're finding the Higgs boson and how things like Kubernetes are helping them there. And digging into things like storage, and I worked at a storage company for 10 years, so I understand that storage is hard. Well, yeah. When containers came out, I was like, "Oh, god, we just fixed it for virtualization, and it took us a decade. How are we going to do it this time?" And they actually quoted a crowd chat that we had in our community. Tim Hockin, of course one of the first Kubernetes guys, was in on that, and we're going to have Tim on this afternoon, too. So, just to set a little context there. Jason, what are your impressions of the show? Anything that has changed in your mind from when you came in here to today? Let's get into it from there. >> Well, this is my second KubeCon. The first one I went to was in Seattle in December. What's interesting from a big picture is really how quickly and broadly Kubernetes has been adopted in the enterprise. It's still, in the broader scheme of things, relatively new, but it's really taking its place as the only container orchestrator anybody cares about. It sort of squashed the 20-or-so alternative container orchestrators that had a brief day in the sun. And furthermore, large enterprises are rapidly adopting it. It's remarkable how many of them have adopted it, and how broadly, how large the deployments. The Conde Nast example was one, but there are quite a number. So we've turned the corner, even though it's relatively immature technology. That's the interesting story as well, that there are still pieces missing. It's sort of like flying an airplane while you're still assembling it, which makes it that much more exciting. >> Yeah, one of the things that has excited me over the last 10 years in tech is how the time it takes to go from ideation to production has been shrinking. Big data was: "Let's take the thing that used to take five years and get it down to 18 months." We all remember ERP deployments and how much money and people you needed to throw at that. >> It still takes a lot of money and people.
>> Right, because it's ERP. I was talking to folks at one of the booths here, and they were doing an informal poll of, "How many of you are going to have Kubernetes in production in the next six months?" Not testing it, but in production in the next six months, and more than half of the people said they were going to be ramping it up in that kind of environment. Anything architecturally? What's intriguing you? What's the area that you're digging into? We know that we are not fully mature, and even though we're seeing production use and huge growth, there's still plenty of work to do. >> An interesting thing about the audience here is it's primarily infrastructure engineers. And the show is aimed at the infrastructure engineers, so it's technical. It's focused on people who code for a living at the infrastructure level, not at the application level. So you have that overall context, and what you end up having, then, is a lot of discussions about the various components. "Here's how we do storage." "Here's how we do this, here's how we do that." And it's all these pieces that people now have to assemble, as opposed to thinking of it overall, from the broader context, which is what I like writing about, in terms of the bigger picture. So the bigger picture is really that Cloud Native, broadly speaking, is a new architectural paradigm. It's more than just an architectural trend. It's a set of trends that really change the way we think about architecture. >> One interesting piece about Kubernetes as well. One of the things we're seeing as Kubernetes starts to expand out is that, unlike serverless, it doesn't necessarily require the same level of, oh, just take everything you've done and spend 18 months rewriting it from scratch, and then it works in this new paradigm in a better way. It's much less of a painful conversion process. We saw in the keynote today that they took WebLogic, of all things, and dropped that into Kubernetes. If you can do it with something as challenging, in some respects, and as monolithic as WebLogic, then almost any other stack you're going to see winds up making some sense. >> Right, you mentioned serverless in contrast with Kubernetes, but actually, serverless is part of this Cloud Native paradigm as well. So it's broader than Kubernetes, although Kubernetes has established itself as the container orchestration platform of choice. But it's really an overall story about how we can leverage the best practices we've learned from cloud computing across the entire enterprise IT landscape, both in the cloud and on premises. And Kubernetes is driving this in large part, but it's a bigger picture than the technology itself. That's what's so interesting, because it's so transformative, but people here are thinking about trees, not the forest. >> It's an interesting thing you say there, and I'm curious if you can help our community, because they look at this, and they're like, "Kubernetes, Kubernetes, Kubernetes." Well, a bunch of the things sit on Kubernetes. As they've tried to say, it's a platform of platforms. It's not the only piece. Many of the things can be with Kubernetes but don't have to be. So, the whole observability piece: we heard about the merging of OpenCensus and OpenTracing into OpenTelemetry. You don't have to have Kubernetes for that to be a piece of it. It can be serverless underneath it. It can be all these other pieces. Cloud Native architecture sits on top of it. So when you say Cloud Native architecture, what defines that? What are the pieces? How do I have to do it?
Is it just, I have to have meditated properly and had a certain sense of being? What do we have to do to be Cloud Native? >> Well, an interesting way of looking at it is: what have we subtracted from the equation, so what is intentionally missing? Cloud Native is stateless, it is codeless, and it is trustless. Now, that's not to say that we don't have ways of dealing with state, and of course there's still plenty of code, and we still need trust. But those are architectural principles that really percolate through everything we do. So containers are inherently stateless; they're ephemeral. Kubernetes deals with ephemeral resources that come and go as needed. This is a key part of how we achieve the scale we're looking for. So now we have to deal with state in a stateless environment, and we need to do that in a codeless way. By codeless, I mean declarative. Instead of saying, "How are we going to do something? Let's write code for that," we're going to say, "How are we going to do that? Let's write a configuration file, a YAML file, or some other declarative representation of what we want to do." And Kubernetes is driven this way. It's driven by configuration, which means that you don't need to fork it. You don't need to go in and monkey with the insides to do something with it. It's essentially configurable and extensible, as opposed to customizable. This is a new way of thinking about how to leverage open-source infrastructure software. In the past, it was open-source, so let's go in and monkey with the code, because that's one of the benefits of open-source. Nobody wants to do that now, because it's declaratively driven, and it's configurable. (A minimal sketch of this declarative approach appears after the transcript.) >> Okay, I hear what you're saying, and I like what you're saying. But one of the things that people say here is everyone's a little bit different, and it is not one solution. There's lots of different paths, and that's what's causing a little bit of confusion as to which service mesh to use, or whether I have a couple of pieces that overlap. And every deployment that I see of this is slightly different, so how do I have my cake and eat it, too? >> Well, you mentioned that Kubernetes is a platform of platforms, and there's little discussion of what we're actually doing with Kubernetes here at the show. Occasionally, there's some talk about AI, and there's some talk about a few other things, but it's really up to the users of Kubernetes, who are now the development teams in the enterprises, to figure out what they want to do with it and, as such, figure out what capabilities they require. Depending upon what applications you're running and the business use cases, you may need certain things more than others, because AI is very different from websites, and it's very different from other things you might be running. So that's part of the benefit of a platform of platforms: it's inherently configurable. You can pick and choose the capabilities you want without having to go into Kubernetes and fork it. We don't want 12 different Kubernetes that are incompatible with each other, but we're perfectly okay with different flavors that are all based on the same fundamental, identical code base. >> We take a look at this entire conference, and it really comes across as, yes, it's KubeCon and CloudNativeCon. We look at the, I think, 36 projects that are now being managed by this. But if we look at the conversations of what's happening here, it's very clear that the focus of this show is Kubernetes and friends, and it tends to be taking the limelight of a lot of this.
One of the challenges you start seeing as soon as you start moving up the stack, or out through the rest of the stack, rather, and seeing what all of these Cloud Native technologies are is that, increasingly, they're starting to be defined by what they aren't. I mean, you have the old saw of, serverless runs on servers, and other incredibly unhelpful sentiments. And we talk about what things aren't more so than we do what they are. And what about the capabilities story? I don't have an answer for this. I think it's one of those areas where language is hard, and defining what these things are is incredibly difficult. But I see what you're saying. We absolutely are seeing a transformative moment. And one of the strangest things about it, to me at least, is the enthusiasm with which large enterprises, which you don't generally think of as being particularly agile or fast-moving, are demonstrating otherwise. They're diving into this in fascinating ways. It's really been enlightening to have conversations for the last couple of days with companies that are embracing this new paradigm. >> Right. Well, from our perspective at Intellyx, we're focusing on digital transformation in the enterprise, which really means putting the customer first and having a customer-driven transformation of IT, as well as of the organization itself. And it's hard to think in those terms, in customer-facing terms, when you're only talking about IT infrastructure. Be that as it may, it's still all customer-driven. And this is sometimes the missing piece: how do we connect what we're doing on the infrastructure side with what customers require from these companies that are implementing it? Often, that missing piece centers on the workload, because, from the infrastructure perspective, we have a notion of a workload, and we want workload portability. And portability is one of the key benefits of Kubernetes. It gives us a lot of flexibility in terms of scalability and deployment options, as well as resilience and other benefits. But the workload also represents the applications we're putting in front of our end users, whether they're employees or end customers. So that's the key piece, the keystone that ties the digital story, that is, the customer-facing, technology-driven, technology-empowered story, to the IT infrastructure story. How do we support the flexibility, scalability, and resilience of the workloads that the business needs to meet its business goals? >> Yeah, I'm really glad you brought up that digital transformation piece, because I have two questions, and I want to make sure I'm allowing you to cover both of them. One is the outcome we hear from people as well: "I need to be faster, and I need to be agile." But at the same time, which pieces do I, as an enterprise, really need to manage? For many of these pieces, shouldn't I just be able to consume them as a managed service, because I don't need to worry about all of those pieces? The Google presentation this morning about storage was: you have two options. Path one is: we'll take care of all of that for you. Path two is: here's the level of turtles that you're going to go all the way down, and we all know how complicated storage is, and it's got to work. If I lose my state, if I lose my pieces there, I'm probably out of business or at least in really big trouble. The second piece on that: you talked about the application and digital transformation.
Speed's great and everything, but we've said at Wikibon that the thing that will differentiate the traditional companies from the digitally transformed is that data will drive your business. You will have data, it will add value to the business, and I don't feel that story has come out yet. Do you see that as the end result from this? And apologies for having two big, complex questions here for you. >> Well, data are core to the digital transformation story, and they're also an essential part of the Kubernetes story. Although, from the infrastructure perspective, we're really thinking more about compute than about data, of course everything boils down to the data. That is definitely always a key part of the story. And you're talking about the different options. You could run it yourself or run it as a managed service. This is a key part of the story as well: it's not about making a single choice. It's about having options, and this is part of the modern cloud story. It's not just about, "Okay, we'll put everything in one public cloud." It's about having multiple public clouds, private clouds, on-premises virtualization, as well as legacy environments. This is what you call hybrid IT: having an abstracted collection of environments that supports workload portability in order to meet the business needs for the infrastructure. And that workload portability, in the context of multiple clouds, is becoming increasingly dependent on Kubernetes as an essential element of the infrastructure. So Kubernetes is not the be-all and end-all, but it's become an essentially necessary part of the infrastructure, to make this whole vision of hybrid IT and digital transformation work. >> For now. I mean, I maintain that, five years from now, no one is going to care about Kubernetes. And there's two ways that goes. Either it dries up, blows away, and something else replaces it, which I don't find likely, or, more likely, it slips beneath the surface of awareness for most people. >> I would agree, yeah. >> The same way that we're not sitting here having an in-depth conversation about which distribution of Linux, or what Linux kernel or virtual memory manager we're working with. That stuff has all slipped under the surface, to the point where there are people who care tremendously about this, but you don't need to employ them at every company, and most companies don't even have to think about it. I think Kubernetes is heading that direction. >> Yeah, it looks like it. Obviously, things continue to evolve. Yeah, Linux is a good example. TCP/IP as well. I remember the network protocol wars of the early 90s, before the web came along, and it was, "Are we going to use Banyan VINES, are we going to use NetWare?" Remember NetWare? "Or are we going to use TCP/IP or Token Ring?" Yeah! >> Thank you. >> We could use GDP, but I don't get it. >> Come on, COBOL's coming back; we're going to bring back Token Ring, too. >> COBOL never went away. Token Ring, though, it's long gone. >> I am disappointed in Corey, here, for not asking the question about portability. The concern we have, as you say, is: okay, I put Kubernetes in here because I want portability. Do I end up with a least-common-denominator cloud? I'm making a decision that I'm not going to go deep on some of the pieces, because, as nice as the APIs let things through, we understand that if I need to work across multiple environments, I'm usually making a trade-off there. What do you hear from customers? Are they aware that they're doing this?
Is this a challenge for people, not getting the full benefit out of whichever primary cloud or whichever clouds they are using? >> Well, portability is not just one thing. It's actually a set of capabilities, depending upon what you are trying to accomplish. So for instance, you may want to simply support backing up your workload, so you want to be able to move it from here to there to back it up. Or you may want to leverage different public clouds, because different public clouds have different strengths, so there may be some portability there. Or you may be doing cloud migration, where you're trying to move from on-premises to cloud, so it's kind of a one-time portability. So there could be a number of reasons why portability is important, and that could impact what it means to you to move something from here to there, and why, how often you're going to do it, how important it is, whether it's a one-to-many kind of thing or a one-to-one kind of thing. It really depends on what you're trying to accomplish. >> Jason, last thing real quick. What research do you see coming out of this? What follow-up? What should people be looking for from Intellyx in this space in the near future? >> Well, we continue to focus on hybrid IT, which includes Kubernetes, as well as some of the interesting trends. One of the interesting stories is how Kubernetes is increasingly being deployed on the edge. And there's a very interesting story there with edge computing, because the telcos are, in large part, driving that, because of their 5G roll-outs. So we have this interesting confluence of disruptive trends. We have 5G, we have edge computing, we have Kubernetes, and it's also a key use case for OpenStack as well. So it's like all of these interesting trends are converging to meet a new class of challenges. And AI is part of that story as well, because we want to run AI at the edge. That's the sort of thing we do at Intellyx: try to take multiple disruptive trends and show the big picture overall. And for my articles for SiliconANGLE, that's what I'm doing as well, so stay tuned for those. >> All right. Jason Bloomberg, thank you for helping us break down what we're doing in this environment. And as you said, actually, some people said OpenStack is dead. Look, it's alive and well in the telco space and actually merging into a lot of these environments. Nothing ever dies in IT, and theCUBE always keeps rolling throughout all the shows. For Corey Quinn, I'm Stu Miniman. We have a fully packed day of interviews here, so be sure to stay with us. And thank you for watching theCUBE. (upbeat techno music)
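As a companion to Jason's "codeless," declarative point earlier in the conversation, here is a minimal, hypothetical Python sketch (not from the interview): the desired state of a workload is expressed as data and handed to Kubernetes, and the platform's controllers work out the "how." The deployment name, image, and replica count are illustrative placeholders.

```python
# Hypothetical sketch: declare desired state and let Kubernetes reconcile it,
# instead of scripting imperative steps. All names/values are placeholders.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
apps = client.AppsV1Api()

desired = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "demo-web", "namespace": "default"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "demo-web"}},
        "template": {
            "metadata": {"labels": {"app": "demo-web"}},
            "spec": {"containers": [{"name": "web", "image": "nginx:1.17"}]},
        },
    },
}

# Hand the declaration to the platform: create it if new, otherwise patch
# the existing object toward the same declared state.
try:
    apps.create_namespaced_deployment(namespace="default", body=desired)
except ApiException as exc:
    if exc.status == 409:  # already exists
        apps.patch_namespaced_deployment(
            name="demo-web", namespace="default", body=desired
        )
    else:
        raise
```

An imperative version of this would create pods, poll their health, and retry failures itself; in the declarative model those concerns belong to the Kubernetes controllers, which is what makes the configuration-over-code, no-forking approach described above possible.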

Published Date : May 22 2019
