
Search Results for SLO:

Anish Dhar & Ganesh Datta, Cortex | KubeCon + CloudNativeCon Europe 2022


 

>> Narrator: TheCUBE presents KubeCon and CloudNativeCon Europe 2022. Brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. I'm Keith Townsend, and we are in a beautiful locale. The city itself is not that big, 100,000, I mean, sorry, about 800,000 people. And we got out, got to see a little bit of the sights. It is an amazing city. I'm from the US; it's hard to put in context how a city of 800,000 people can be so beautiful. I'm here with Anish Dhar and Ganesh Datta, co-founder and CTO of Cortex. Anish, you're CEO of Cortex. We were having a conversation. One of the things that I ask my clients is: what is good? And you're claiming to answer the question about what is quality when it comes to measuring microservices. What is quality? >> Yeah, I think it really depends on the company, and I think that's really the philosophy we have. When we built Cortex, we understood that different companies have different definitions of quality, but they need to be able to be represented in really objective ways. I think what ends up happening in most engineering organizations is that quality lives in people's heads. The engineers who write the services are often the ones who understand all the intricacies of the service. What are the downstream dependencies? Who's on call for this service? Where does the documentation live? All of these things, I think, impact the quality of the service. And as these engineers leave the company or switch teams, they often take that tribal knowledge with them. And so I think quality really comes down to being able to objectively codify your best practices in some way and have that distributed to all engineers in the company. >> And to add to that, to give very concrete examples: for an organization that's already modern, their idea of quality might be uptime and incidents. For somebody that's going through a modernization strategy, they're trying to get to the 21st century, they're trying to get to Kubernetes. For them, quality means: where are we in that journey? Are you on our latest platforms? Are you running CI? Are you doing continuous delivery? Quality can mean a lot of things, and so our perspective is: how do we give you the tools to say, as an organization, here's what quality means to us? >> So at first, when you said quality, Anish — you started out the conversation with this kind of non-codified set of measurements, historical knowledge, et cetera — my mind was going to observability: measuring how much time it takes to complete a transaction. But Ganesh, you're introducing this new thing. I'm working with this project where we're migrating a monolith application to a set of microservices. And you're telling me Cortex helps me measure the quality of what I'm doing in my project? >> Ganesh: Absolutely. >> How is that? >> Yeah, it's a great question. So when you think about observability, you think about uptime and latency and transactions and throughput and all this stuff. And I think that's very high level, and it's one perspective of what quality is. But as you're going through this journey, you might say the fact that we're tracking that stuff, the fact that you're using APM, you're using distributed tracing, that is one element of service quality. Maybe service quality means you're doing CI/CD, you're running vulnerability scans. You're using Docker.
Like, what that means to us can be very different. So observability is just one aspect of: are you doing things the right way? Good to us means you're using SLOs. You are tracking those metrics. You're reporting that somewhere. And so that's one component, for our organization, of what quality can mean. >> I'm kind of taken aback by this, because I've not seen someone frame the idea this way. And I think later on, this is the perfect segue to introduce theCUBE clock, in which I'm going to give you a minute to give me the elevator pitch, but we're going to have the deep conversation right now. When you go in and you... what's the first process you do when you engage a customer? Does a customer go and get this off a repository, install it, the open source version, and then what? I mean, what's the experience? >> Yeah, absolutely. So we have both a SaaS and an on-prem version of Cortex. It's really straightforward. Basically, we have a service discovery onboarding flow where customers can connect to different sets of sources for their services. It could be Kubernetes, ECS, Git repos, APM tools, and then we'll actually automatically map all of that service data with all of the integration data in the company. So we'll take that service and map it to its on-call rotation, to the JIRA tickets that have the service tag associated with it, to the Datadog SLOs. And what that ends up producing is this service catalog that has all the information you need to understand your service. Almost like a single pane of glass to work with the service. And then once you have all of that data inside Cortex, you can start writing scorecards, which grade the quality of those services across the different verticals Ganesh was talking about — whether it's a monolith-to-microservice transition, production readiness, or security standards, you can really start tracking that. And then engineers start understanding where the areas of risk are with their service, across reliability or security or operational maturity. I think it gives you insane visibility into what's actually being built and the quality of that compared to your standards. >> So, okay, I have a standard for SLOs that is usually something that, well, it might not even be measured. So how do you help me understand that I'm lacking a measurable system for tracking SLOs, and what's the next step for helping me get that system? >> Yeah, I think our perspective is very much: how do we help you create a culture where developers understand what's expected of them? So if SLOs are part of what we consider observability or reliability, then Cortex's perspective is, hey, we want to help your organization adopt SLOs. And so with that service cataloging concept, the service catalog says, hey, here's my APM integration. Then, with a scorecard, the organization goes in and says, we want every service owner to define their SLOs, we want you to define your thresholds, we want you to be tracking them: are you passing your SLOs? And so we're not being prescriptive about here's what we think your SLOs should be. Ours is more around, hey, if you care about SLOs, we're going to tell the service owners: you need to have at least two SLOs for your service, and you've got to be tracking them. And that data flows from the service catalog into those scorecards. And so we're helping them adopt that mindset of, hey, SLOs are important. It is a component of a holistic service reliability metric that we care about.
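To make the scorecard idea concrete, here is a minimal sketch of the kind of check being described. This is not Cortex's actual API — the Service fields, rule names, and weights are invented for illustration:

```python
# Hypothetical scorecard-style grading; the fields, rules, and weights
# are invented for illustration and are not Cortex's actual API.
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    slos: list = field(default_factory=list)  # e.g. discovered from Datadog
    on_call: str | None = None                # e.g. discovered from PagerDuty
    has_ci: bool = False
    runs_vuln_scans: bool = False

# Each rule: (description, weight, predicate over a Service).
RULES = [
    ("defines at least two SLOs", 3, lambda s: len(s.slos) >= 2),
    ("has an on-call rotation",   2, lambda s: s.on_call is not None),
    ("runs CI",                   2, lambda s: s.has_ci),
    ("runs vulnerability scans",  1, lambda s: s.runs_vuln_scans),
]

def score(service: Service) -> float:
    """Fraction of weighted rules the service passes."""
    total = sum(weight for _, weight, _ in RULES)
    passed = sum(weight for _, weight, check in RULES if check(service))
    return passed / total

svc = Service("checkout", slos=["latency-p99", "availability"],
              on_call="payments-team", has_ci=True)
print(f"{svc.name}: {score(svc):.0%}")  # checkout: 88%
```

In a real catalog the service fields would be populated from the integrations mentioned above rather than hard-coded, and the rules would encode whatever "quality" means to the organization.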
>> So what happens when I already have systems for, like, SLOs? How do I integrate that system with Cortex? >> That's one of the coolest things. The service catalog can be pretty smart about it. So let's say you've pulled in your services from your GitHub, and so now your services are in Cortex. What we can do is actually discover from your APM tools — we can say, hey, for this service, we have guessed that this is the corresponding APM in Datadog. And so from Datadog, here are your SLOs, here are your monitors. And so we can start mapping all the different parts of your world into Cortex. And that's the power of the service catalog. The service catalog says: given a service, here's everything about that service. Here are the vulnerability scans, here's the APM, the monitors, the SLOs, the JIRA tickets — all that stuff comes into a single place. And then our scorecards product can go back out and say, hey, Datadog, tell me about the SLOs for this service. And so we're going to get that information live and then score your services against it. And so we're integrating with all of your third-party tools and integrations to create that single pane of glass. >> Yeah, and to add to that, I think one of the most interesting use cases with scorecards is: okay, which teams have actually adopted SLOs in the first place? I think a lot of companies struggle with how to make sure engineers define SLOs, pass them, and actually care about them. And scorecards can be used to see, one, which teams are actually meeting these guidelines, and then two, to get the remaining teams adopted on SLOs. Let's track that — you can do all of that in Cortex, which is, I think, a really interesting use case that we've seen. >> So let's talk about kind of my use case in the end-to-end process for integrating Cortex into migrations. So I have this monolithic application, I want to break it into microservices, and then I want to ensure that I'm delivering... you know what, let's leave it a little bit more open-ended. How do I know that I'm better at the end? I was in a monolith before; how do I measure, now that I'm in microservices and on cloud native, that I'm better? >> That's a good question. I think it comes down to — and we talk about this all the time with our customers that are going through that process — you can't define better if you don't define a baseline: what does good mean to us? And so you need to start by saying, why are we moving to microservices? Is it because we want teams to move faster? Is it because we care about reliability and uptime? What is the core metric that we're tracking? And so you start by defining that as an organization. And that is kind of a hand-wavy thing: why are we doing microservices? Once you have that, then you define the scorecard, and that's like our golden path. Once we're done doing this microservice migration, can we say, yes, we have been successful, and the metrics that we care about are being tracked? And so where Cortex fits in is from the very first step of creating a service. You can use Cortex to define templates. With one click, you go in, it spins up a microservice for you that follows all your best practices. And so from there, ideally you're meeting 80% of your standards already. And then you can use scorecards to track historical progress. So you can say, are we meeting our golden path standards?
If it's uptime, you can track uptime metrics in scorecards. If it's around velocity, you can track velocity metrics. Is it just around modernization — are you doing CI/CD and vulnerability scans, moving faster as a team? You can track that. And so you can start seeing trends at a per-team level, at a per-department level, at a per-product level, saying, hey, we are seeing consistent progress in the metrics that we care about, and this microservice journey is helping us with that. So I think that's the kind of phased progress that we see with Cortex. >> So I'm going to give you kind of a hand-wavy thing. We're told that cloud native helps me do things faster with fewer defects so that I can pursue new opportunities. Let's stretch into this non-tech, new-opportunities perspective. I want to be able to move my architecture to microservices so I reduce call wait time on my customer service calls. So I can easily see how I can measure: are we iterating faster? Are we putting out more updates quicker? That's pretty easy to measure. The number of defects, easy to measure. I can imagine a scorecard. But what about this wait time? I don't necessarily manage the call center system, but I get the data. How do I measure that the microservice migration was successful from a business process perspective? >> Yeah, that's a good question. I think it comes down to two things. One, the flexibility of scorecards means you can pipe that data into Cortex. And what we recommend to customers is: track the outcome metrics, and track the input metrics as well. So what is the input metric to call wait time? Maybe it's the fact that if something goes wrong, we have the runbooks to quickly roll back to an older version that we know is running, so that MTTR is faster. Or when something happens, we know the owner for that service, and we can go back to them and say, hey, we're going to ping you as incident commander. Those are the input metrics to: if we do these things, then we know our call wait time is going to drop, because we're able to respond faster to incidents. And so you want to track those input metrics, and then you want to track the output metrics as well. And so if you have those metrics coming in from your Prometheus or your Datadogs or whatever, you can pipe that into Cortex and say, hey, we're going to look at both of these things holistically. We want to see: is there a correlation between those input metrics — are we doing things the right way — versus are we seeing the value that we want to come out of that? And so I think that's the value of Cortex. It's not so much around, hey, we're going to be prescriptive about it; it's: here's this framework that will let you track all of that and say, are we doing things the right way, and is it giving us the value that we want? And being able to report that up to engineering leadership and say, hey, maybe these services are not doing well — we're not improving call wait time. Okay, why is that? Are these services behind on the actual input metrics that we care about? And so being able to see that, I think, is super valuable.
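As a sketch of the correlation check Ganesh describes — with invented numbers standing in for data that would really be piped in from Prometheus or Datadog:

```python
# Illustrative only: correlate an "input" practice metric (runbook coverage)
# with an "output" business metric (call wait time). The numbers are invented;
# real values would come from your telemetry systems.
from statistics import correlation  # Python 3.10+

runbook_coverage = [0.40, 0.55, 0.65, 0.80, 0.90]  # per week, input metric
call_wait_secs   = [190, 175, 160, 140, 120]       # per week, output metric

r = correlation(runbook_coverage, call_wait_secs)
print(f"Pearson r = {r:.2f}")  # strongly negative: more runbooks, less waiting
```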
>> Yeah, absolutely. I think, just to touch on the reporting, that's one of the most value-add things Cortex can provide. If you think about it, the service is the atomic unit of your software. It represents everything that's being built, and that bubbles up into teams, products, business units, and Cortex lets you represent that. So now I can, as a CTO, come in and say, hey, these product lines — are they actually meeting our standards? Where are the areas of risk? Where should I be investing more resources? I think Cortex is almost the best way to get the actual health of your engineering organization. >> All right, Anish and Ganesh, we're going to go into the speed round here. >> Ganesh: It's time for the Q clock? >> Time for the Q clock. Start the Q clock. (upbeat music) Let's go on. >> Ganesh: Let's do it. >> Anish: Let's do it. >> Let's go on. You're 10 seconds in. >> Oh, we can start talking. Okay, well, I would say, Anish was just touching on this. For a CTO, their question is: how do I know if engineering quality is good? And they don't care about the microservice level. They care about, as a business, is my engineering team actually producing? >> Keith: Follow the green, not the dream. (Ganesh laughs) >> And so the question is, well, how do we codify service quality? We don't want this to be a hand-wavy thing that says, oh, my team is good, my team is bad. We want to come in and define here's what service quality means. And we want that to be a number. You want that to be something that you can- >> A goal without a timeline is just a dream. >> And a CTO comes in and they say, here's what we care about, here's how we're tracking it, here are the teams that are doing well. We're going to reward the winners. We're going to move towards a world where every single team is doing service quality. And that's what Cortex can provide. We can give you that visibility that you never had before. >> For that five seconds. >> And hey, your SRE can't be the one handling all this. So let Cortex- >> Shoot the bad guy. >> Shot that, we're done. From Valencia, Spain, I'm Keith Townsend, and you're watching theCUBE, the leader in high tech coverage. (soft music)

Published Date: May 20, 2022


SENTIMENT ANALYSIS:

ENTITIES

Entity | Category | Confidence
Anish | PERSON | 0.99+
Keith Townsend | PERSON | 0.99+
Cortex | ORGANIZATION | 0.99+
80% | QUANTITY | 0.99+
Keith | PERSON | 0.99+
Red Hat | ORGANIZATION | 0.99+
US | LOCATION | 0.99+
Ganesh | PERSON | 0.99+
21st century | DATE | 0.99+
100,000 | QUANTITY | 0.99+
10 seconds | QUANTITY | 0.99+
two | QUANTITY | 0.99+
five seconds | QUANTITY | 0.99+
two things | QUANTITY | 0.99+
first | QUANTITY | 0.99+
Valencia, Spain | LOCATION | 0.99+
800,000 people | QUANTITY | 0.99+
Cortex | TITLE | 0.99+
Valencia Spain | LOCATION | 0.99+
one element | QUANTITY | 0.99+
one aspect | QUANTITY | 0.99+
both | QUANTITY | 0.99+
one | QUANTITY | 0.99+
Cloudnativecon | ORGANIZATION | 0.99+
one perspective | QUANTITY | 0.99+
Datadog | ORGANIZATION | 0.99+
one component | QUANTITY | 0.99+
Ganesh Datta | PERSON | 0.98+
One | QUANTITY | 0.98+
SLO | TITLE | 0.98+
2022 | DATE | 0.98+
first step | QUANTITY | 0.98+
Kubecon | ORGANIZATION | 0.97+
about 800,000 people | QUANTITY | 0.97+
one click | QUANTITY | 0.97+

Matt Provo & Patrick Bergstrom, StormForge | KubeCon + CloudNativeCon Europe 2022


 

>>theCUBE presents KubeCon and CloudNativeCon Europe 22, brought to you by the Cloud Native Computing Foundation. >>Welcome to Valencia, Spain. We're at KubeCon + CloudNativeCon Europe 2022. I'm Keith Townsend, and my co-host is Enrico Signoretti — Enrico's really proud of me, I've called him Enrico every session — senior IT analyst at GigaOm. We're talking to fantastic builders at KubeCon + CloudNativeCon about the projects and the efforts. Enrico, up to this point it's been all about provisioning and security. What conversation have we been missing? >>Well, I think that we passed the point of having the conversation of deployment, of provisioning. You know, everybody's very skilled; actually, everything is done. At day two, they are discovering that, well, there is a security problem, there is an observability problem. And in fact, we are meeting with a lot of people, and there are a lot of conversations with people really needing to understand what is happening in their clusters — what and why it is happening — and all the questions that come with it. And the more I talk with people on the show floor here, or even in the various sessions, it's about: we are growing, our clusters are becoming bigger and bigger, applications are becoming bigger as well. So we need to understand better what is happening. It's not only about cost; it's about everything, in the end. >>So I think that's a great setup for our guests: Matt Provo, founder and CEO of StormForge, and Patrick Bergstrom — Brookstone... yeah, I spelled it right, I just didn't say it right — Bergstrom, CTO. We're at KubeCon + CloudNativeCon, where projects are discussed and built, and StormForge — I've heard the pitch before, so forgive me — I'm kind of torn. I have service mesh. What do I need more? Like, what problem is StormForge solving? >>You wanna take it? >>Sure, absolutely. So it's interesting, because my background is in the enterprise, right? I was an executive at UnitedHealth Group. Before that, I worked at Best Buy. And one of the issues that we always had was, especially as you migrate to the cloud, it seems like the CPU dial or the memory dial is your reliability dial. So it's like, oh, I just turn that all the way to the right and everything's hunky-dory, right? But then we run into the issue, like you and I were just talking about, where it gets very, very expensive very quickly. And so in my first conversations with Matt and the StormForge group, when they were telling me about the product and what we're dealing with, I said: that is the problem statement that I have always struggled with, and I wish this had existed 10 years ago when I was dealing with EC2 costs, right? And now with Kubernetes, it's the same thing. It's so easy to provision. So realistically, what it is, is we take your raw telemetry data and we essentially monitor the performance of your application. And then we can tell you, using our machine learning algorithms, the exact configuration that you should be using for your application to achieve the results that you're looking for, without over-provisioning. So we reduce your consumption of CPU and memory in production, which ultimately — nine times out of 10, actually I would say 10 out of 10 — reduces your cost significantly without sacrificing reliability.
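As a rough sketch of the rightsizing idea Bergstrom describes — sizing requests from observed usage instead of turning the dial all the way to the right — consider the following. The percentile and headroom factor are invented placeholders; StormForge's actual machine learning is far more involved than one rule:

```python
# Naive rightsizing from telemetry samples. The percentile and headroom
# below are invented for illustration; real systems use ML, not one rule.
import math

def recommend_request(samples_mcpu: list[int],
                      percentile: float = 0.95,
                      headroom: float = 1.2) -> int:
    """Recommend a CPU request (millicores) from observed usage samples."""
    ordered = sorted(samples_mcpu)
    idx = min(len(ordered) - 1, math.ceil(percentile * len(ordered)) - 1)
    return int(ordered[idx] * headroom)

observed = [120, 180, 150, 210, 640, 200, 175, 190, 160, 155]  # spiky usage
print(recommend_request(observed), "millicores")  # far below a blanket 1000m
```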
>>So can your solution also help to optimize the application in the long run? Because, yes, of course, the low-hanging fruit is optimizing the deployment. But actually the long-term goal is optimizing the application, which is the real problem. >>Yep. So we're fine with the former of what you just said, but we exist to do the latter. And so we're squarely and completely focused at the application layer. As long as you can track or understand the metrics you care about for your application, we can optimize against them. We love that we don't know your application. We don't know what the SLA and SLO requirements are for your app — you do. And so in our world, it's about empowering the developer into the process, not automating them out of it. And I think sometimes AI and machine learning sort of get a bad rap from that standpoint. The company's been around since 2016, kind of from the very early days of Kubernetes, and we've always been squarely focused on Kubernetes, using our core machine learning engine to optimize metrics at the application layer that people care about and need to go after. And the truth of the matter is, today and over time, setting a cluster up on Kubernetes has largely been solved. And yet the promise of Kubernetes around portability and flexibility smacks you in the face downstream, when you operationalize the complexity. And that's where StormForge comes in. And so we're a vertically oriented solution that's absolutely focused on solving that problem. >>Well, I want to play the devil's advocate here... >>You wouldn't be a good analyst if you didn't. >>So the problem is, when you talk with clients, users, there are many of them still working with Java — you know, something that is really tough. I mean, all of us loved Java, maybe 20 years ago. Not anymore. But still, they have developers, they are porting applications to microservices, yes, but not very optimized, et cetera, et cetera. So it's becoming tough. So how can you interact with these kinds of old, hybrid, or anyway not well-optimized applications? >>Yeah, we do that today. Part of our platform is that we offer performance testing in a lower environment, in stage, and, like Matt was saying, we can use any metric that you care about and we can work with any configuration for that application. So the perfect example is Java: you have to worry about your heap size, your garbage collection tuning. And one of the things that really struck me very early on about the StormForge product is that, because it is true machine learning, you remove the human bias from that. So a lot of what I did in the past, especially around SRE and performance tuning — we were only as good as our humans were, because of what they knew. And we kind of got stuck in these paths of making the same configuration adjustments, making the same changes to the application, hoping for different results. But when you apply machine learning capability to that, the machine will recommend things you never would've dreamed of. And you get amazing results out of that.
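A toy illustration of that point — an automated search over tuning parameters against a measured objective has no human habits to get stuck in. The parameter ranges and the stand-in latency function below are invented for illustration:

```python
# Toy random search over JVM-style knobs. measure_p99_latency() is a stand-in
# for a real load test; in practice you would deploy and measure each config.
import random

random.seed(7)  # deterministic demo

def measure_p99_latency(heap_mb: int, gc_threads: int) -> float:
    """Pretend measurement: bigger heap and more GC threads help, noisily."""
    return 500 / (heap_mb ** 0.5) + 12 / gc_threads + random.uniform(0, 2)

best = min(
    ({"heap_mb": random.choice([512, 1024, 2048, 4096]),
      "gc_threads": random.choice([2, 4, 8])} for _ in range(50)),
    key=lambda cfg: measure_p99_latency(**cfg),
)
print("best candidate:", best)
```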
>>So both me and Enrico have been doing this for a long time. Like, I have battled to my last breath the argument, back when it was bare metal or a VM: look, I cannot give you any more memory. And the argument going all the way up to the CIO, and the CIO basically saying, you know what, Keith, you're cheap; my developer resources are expensive; buy the bigger box. Buying a bigger box in the cloud, to your point, is no longer an option, because it's just expensive. Talk to me about the carrot or the stick, as developers are realizing that they have to be more responsible. Where's the culture change coming from? Is it the shift in responsibility? >>I think the center of the bullseye for us is within those sets of decisions — not in a static way, but in an ongoing way, especially as the development of applications, and the management of them, becomes more and more rapid. Our charge and our belief, wholeheartedly, is that you shouldn't have to choose. You should not have to choose between cost or performance. You should not have to choose where your applications live, in a public, private, or hybrid cloud environment. And so we want to empower people to be able to sit in the middle of all of that chaos, and for those trade-offs and those difficult interactions to no longer be a thing. We're at a place now where we've done hundreds of deployments, and never once have we met a developer who said, I'm really excited to get out of bed and come to work every day and manually tune my application. (laughs) On one side. And secondly, we've never met a manager or someone with budget who said, please don't increase the value of the investment that I've made to lift and shift us over to the cloud, or to Kubernetes, or some combination of both. And so what we're seeing is the converging of these groups; their happy place is the lack of needing to make those trade-offs. And that's been exciting for us. >>You know, I'm listening, and it looks like your solution is right in the middle of application performance management, observability, and monitoring. So it's a little bit of all of this. >>We want to be the Intel Inside of all of that. We often get lumped into one of those categories — it used to be APM a lot; we sometimes get asked, are you observability? — and we're really not any of those things in and of themselves. Instead, we've invested in deep integrations and partnerships with a lot of that tooling, because in a lot of ways the tool chain is hardening in a cloud native and Kubernetes world. And so we're integrating intelligently, staying focused and great at what we solve for, but then seamlessly partnering and not requiring switching for our users, who have likely already invested in APM or observability. >>So, to go a little bit deeper: what does integration mean? Do you provide data to these other applications in the environment, or are they supporting you in the work that you do? >>Yeah, we're a data consumer for the most part. In fact, one of our big taglines is: take your observability and turn it into actionability. It's one thing to collect all of the data, but then how do you know what to do with it?
So, to Matt's point, we integrate with folks like Datadog, we integrate with Prometheus today. We want to collect that telemetry data and then do something useful with it for you. >>But also, we want Datadog customers — for example, we have a very close partnership with Datadog — so that in your existing Datadog dashboard, you now have the StormForge capability showing up in the same location. And so you don't have to switch out. >>So I was just going to ask: is it a push or a pull? What is the developer experience? When you say you provide developers these ML learnings about performance, how do they receive them? What's the developer experience? >>They can receive it. So we have our own — for a while we were CLI-only, like any good developer tool — and, you know, we have our own UI. And so it is a push, in a lot of cases, where I can come to one spot: I've got my applications, and every time I'm going to release, or plan for a release, or have released and want to pull in observability data from a production standpoint, I can visualize all of that within the StormForge UI and platform and make decisions. We allow you to set the comfort level of automation that you're okay with. You can be completely set-and-forget, or you can be somewhere along that spectrum. And you can say, as long as it's within these thresholds, go ahead and release the application, or go ahead and apply the configuration. But we also allow you to experience a lot of the same functionality right now in Grafana, in Datadog, and in a bunch of others that are coming. >>So I've talked to Tim Crawford, who talks to a lot of CIOs, and he's saying one of the biggest challenges, if not the biggest challenge, CIOs are facing is resource constraints. They cannot find the developers to begin with to get this feedback. How are you hoping to address this biggest pain point for CIOs? >>Development? >>Just take that one. Yeah, absolutely. So, like my background — like I said, at UnitedHealth Group — it's not always just about cost savings. In fact, the way that I look at some of these tech challenges, especially when we talk about scalability, there are kind of three pillars that I consider. There's the tech scalability: how am I solving those challenges? There's the financial piece, because you can only throw money at a problem for so long. And it's the same thing with the human piece: I can only find so many bodies, and right now that pool is very small. And so we are absolutely squarely in that footprint of: we enable your team to focus on the things that matter, not manual tuning, like Matt said. And then there are other resource constraints that I think a lot of folks don't talk about, too. Like you were saying about private cloud, for instance: having a physical data center — I've worked with physical data centers that companies I've worked for have owned, where it is literally full, wall to wall. You can't rack any more servers in it. And so their biggest option is, well, I could spend 1.2 billion to build a new one if I wanted to. Or, if you had a capability to truly optimize your compute to what you need and free up 30% of the capacity of that data center, you can deploy additional namespaces into your cluster.
Like, that's a huge opportunity. >>So, maybe an odd question — it may not sound very intelligent at this point — but is it an ongoing process, or is it something that you do at the very beginning, when you start deploying this? And maybe, as a service, once a year I say, okay, let's do it again and see if something changes? So: one spot, one single run, you know? >>Yeah — would you recommend somebody performance test just once a year? >>Like, so that's my thing: at previous roles I had, my rule was, you performance test every single release, and that was at a minimum once a week. And if your thing did not get faster, you had to have an executive exception to get it into production. And that's the space that we want to live in as well, as part of your CI/CD process. This should be continuous verification: every time you deploy, we want to make sure that we're recommending the perfect configuration for your application in the namespace that you're deploying into.
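A sketch of the kind of CI/CD verification gate Bergstrom describes — fail the pipeline when a release gets slower. The metric names and threshold are placeholders, not a real StormForge interface:

```python
# Placeholder CI gate in the spirit described: block the release unless the
# candidate build's latency stays within tolerance of the last release.
import sys

def gate(baseline_p99_ms: float, candidate_p99_ms: float,
         tolerance: float = 0.05) -> None:
    """Exit nonzero unless the candidate is within tolerance of baseline."""
    limit = baseline_p99_ms * (1 + tolerance)
    if candidate_p99_ms > limit:
        sys.exit(f"FAIL: p99 {candidate_p99_ms:.0f} ms > allowed {limit:.0f} ms"
                 " -- needs an executive exception to ship")
    print(f"PASS: p99 {candidate_p99_ms:.0f} ms <= {limit:.0f} ms")

gate(baseline_p99_ms=250, candidate_p99_ms=240)
```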
Um, and so especially with some of the products that we have coming out with what we want to do, you know, partnering with, uh, you know, integrating heavily with the HPA and being able to handle some of those bumps and not necessarily bumps, but bursts and being able to do it in a way that's intelligent so that we can make sure that, like I said, it's the perfect configuration for the application regardless of the time of day that you're operating in or what your traffic patterns look like. Um, or you know, what your disc looks like, right? Like cuz with our, our low environment testing, any metric you throw at us, we can, we can optimize for. >>So Madden Patrick, thank you for stopping by. Yeah. Yes. We can go all day. Because day two is I think the biggest challenge right now. Yeah. Not just in Kubernetes, but application replatforming and re and transformation. Very, very difficult. Most CTOs and S that I talked to, this is the challenge space from Valencia Spain. I'm Keith Townsend, along with my host en Rico senior. And you're watching the queue, the leader in high tech coverage.

Published Date: May 18, 2022


SENTIMENT ANALYSIS:

ENTITIES

Entity | Category | Confidence
Tim Crawford | PERSON | 0.99+
Keith Townsend | PERSON | 0.99+
30 | QUANTITY | 0.99+
40 | QUANTITY | 0.99+
1.2 billion | QUANTITY | 0.99+
Matt | PERSON | 0.99+
Matt Provo | PERSON | 0.99+
Datadog | ORGANIZATION | 0.99+
storm for forge | ORGANIZATION | 0.99+
Patrick Bergstrom | PERSON | 0.99+
2016 | DATE | 0.99+
Java | TITLE | 0.99+
10 | QUANTITY | 0.99+
Melissa Spain | PERSON | 0.99+
nine times | QUANTITY | 0.99+
Valencia Spain | LOCATION | 0.99+
40% | QUANTITY | 0.99+
less than 30% | QUANTITY | 0.99+
10 years ago | DATE | 0.98+
United health group | ORGANIZATION | 0.98+
both | QUANTITY | 0.98+
20 years ago | DATE | 0.98+
one | QUANTITY | 0.98+
Keith | PERSON | 0.98+
once a year | QUANTITY | 0.98+
once a week | QUANTITY | 0.98+
HPA | ORGANIZATION | 0.98+
2022 | DATE | 0.98+
Coon | ORGANIZATION | 0.98+
30% | QUANTITY | 0.98+
first conversations | QUANTITY | 0.97+
Cloudnativecon | ORGANIZATION | 0.97+
60% | QUANTITY | 0.97+
Kubernetes | TITLE | 0.97+
Etti | PERSON | 0.97+
today | DATE | 0.96+
Patrick Britton | PERSON | 0.96+
Kubecon | ORGANIZATION | 0.96+
StormForge | ORGANIZATION | 0.95+
data dog | ORGANIZATION | 0.94+
Prometheus | TITLE | 0.94+
three pillars | QUANTITY | 0.94+
secondly | QUANTITY | 0.94+
Rico | ORGANIZATION | 0.93+
Q con cloud | ORGANIZATION | 0.93+
hundreds of deployments | QUANTITY | 0.92+
day two | QUANTITY | 0.92+
Europe | LOCATION | 0.92+
Kubernetes | ORGANIZATION | 0.92+
Intel | ORGANIZATION | 0.92+
one spot | QUANTITY | 0.89+
at least 10% | QUANTITY | 0.87+
one thing | QUANTITY | 0.85+
hundred vendors | QUANTITY | 0.83+
Once in a year | QUANTITY | 0.83+
cuon cloud native con | ORGANIZATION | 0.81+
Rico | LOCATION | 0.81+
Brookstone | ORGANIZATION | 0.8+
Grafana | ORGANIZATION | 0.8+
Berg storm CTO | ORGANIZATION | 0.8+
SRE | TITLE | 0.79+
SLA | TITLE | 0.79+
Bergstrom | ORGANIZATION | 0.79+
cloud native con | ORGANIZATION | 0.78+
single release | QUANTITY | 0.77+
storm forge group | ORGANIZATION | 0.75+
1 | QUANTITY | 0.75+
One side | QUANTITY | 0.74+
EC two | TITLE | 0.74+
1 single | QUANTITY | 0.74+
Patrick | PERSON | 0.74+

Clemence W. Chee & Christoph Sawade, HelloFresh


 

(upbeat music) >> Hello everyone. We're here at theCUBE startup showcase made possible by AWS. Thanks so much for joining us today. You know, when Zhamak Dehghani was formulating her ideas around data mesh, she wasn't the only one thinking about decentralized data architectures. HelloFresh was going into hyper-growth mode and realized that in order to support its scale, it needed to rethink how it thought about data. Like many companies that started in the early part of the last decade, HelloFresh relied on a monolithic data architecture, and its internal team had concerns about its ability to support continued innovation at high velocity. The company's data team began to think about the future and work backwards from a target architecture, which possessed many principles of so-called data mesh, even though they didn't use that term specifically. The company is a strong example of an early but practical pioneer of data mesh. Now, there are many practitioners and stakeholders involved in evolving the company's data architecture, many of whom are listed here on this slide. Two are highlighted in red and joining us today. We're really excited to welcome to theCUBE Clemence Chee, who is the global senior director for data at HelloFresh, and Christoph Sawade, who's the global senior director of data, also of course at HelloFresh. Folks, welcome. Thanks so much for making some time today and sharing your story. >> Thank you very much. >> Thanks, Dave. >> All right, let's start with HelloFresh. You guys are number one in the world in your field. You deliver hundreds of millions of meals each year to many, many millions of people around the globe. You're scaling. Christoph, tell us a little bit more about your company and its vision. >> Yeah. Should I start, or Clemence? Maybe you take over the first piece, because Clemence has actually been a director at HelloFresh for longer. >> Yeah, go ahead Clemence. >> I mean, yes, approximately six years ago I joined HelloFresh, and I didn't think the startup I was joining would eventually IPO. But just two years later, HelloFresh went public. And approximately three years and 10 months after HelloFresh was listed on the German stock exchange, which was just last week, HelloFresh was included in the DAX, Germany's leading stock market index, and that is, to my mind, a great, great milestone, and I'm really looking forward and I'm very excited for the future for HelloFresh and also our data. The vision that we have is to become the world's leading food solution group. And there are a lot of attractive opportunities. So recently we launched and expanded in Norway. This was in July. And earlier this year, we launched the US brand, Green Chef, in the UK as well. We're committed to continuously launching in different geographies in the coming years and have a strong path ahead of us. With the acquisition of ready-to-eat companies like Factor in the US and the planned acquisition of Youfoodz in Australia, we are diversifying our offer, now reaching even more and more untapped customer segments and increasing our total addressable market. So by offering customers a growing range of different alternatives to shop for food and to consume meals, we are charging towards this vision and this goal to become the world's leading integrated food solutions group.
So maybe you guys could talk a little bit about your journey as a company, specifically as it relates to your data journey. I mean, you began as a startup, you had a basic architecture, and like everyone, you've made extensive use of spreadsheets, you built a Hadoop-based system that started to grow. And when the company IPO'd, you really started to explode. So maybe describe that journey from a data perspective. >> Yes, Dave. So by 2015, HelloFresh had evolved what amounts to a classical, centralized data management setup. We grew very organically over the years, and there were a lot of very smart people around the globe, really building the company and building our infrastructure. This also means that there were a small number of internal and external data sources, and a centralized BI team with a number of people producing different reports, different dashboards and products for our executives, for example, or for different operations teams to see the company's performance, and knowledge was transferred just by talking to each other in face-to-face conversations. And the people in the data warehouse team were considered the data wizards or the ETL wizards. Very classical challenges. And it was the ETL team that was seen as holding the knowledge of data management, right? So our central data warehouse team was responsible for different types of verticals in different domains, different geographies. And all this setup gave us, in the beginning, the flexibility to grow fast as a company. >> Christoph, anything to add to that? >> Not explicitly to that one, but as Clemence said, right, this was kind of the setup that actually worked for us for quite a while. And then in 2017, when HelloFresh went public, the company also grew rapidly. And just to give you an idea how that looked, the tech departments actually increased from about 40 people to almost 300 engineers. And in the same way, the business units, as Clemence has described, also grew substantially. So we continued to launch HelloFresh in new countries, launched new brands like Every Plate, and also acquired other brands, like Factor. And with that growth, also from a data perspective, the number of data requests that the central team was getting became more and more, and also more and more complex. For the team, that meant they had a fairly high mental load. They had to get a very deep understanding of the business, and they also suffered a lot from context switching back and forth. Essentially, they had to prioritize across requests from our physical product, our digital product, from the marketing perspective, and also from the central reporting teams. In a nutshell, this was very hard for these people, and it created a situation where, let's say, the solutions we had built were not really optimal. So in a nutshell, the central function became a bottleneck and slowed down all the innovation of the company. >> It's a classic case. Isn't it? I mean, Clemence, you see the central team becomes a bottleneck, and so the lines of business, the marketing team, sales teams say "Okay, we're going to take things into our own hands." And then of course IT and the technical team is called in later to clean up the mess. Maybe, maybe I'm overstating it, but that's a common situation. Isn't it?
>> Yeah, this is exactly what happened. Right. So we had a bottleneck, we had those central teams, there was always a bit of tension. Analytics teams in those business domains, like marketing, supply chain, finance, HR, and so on, then really started to build their own data solutions. At some point you have to get the ball rolling, right, and then continue the trajectory, which meant that the data pipelines didn't meet the engineering standards, and there was an increased need for maintenance and support from central teams. Hence, over time, the knowledge about those pipelines and how to maintain a particular infrastructure, for example, left the company, such that most of those data assets and data sets turned into a huge debt, with decreasing data quality, a growing lack of trust, and decreasing transparency. And this was an increasing challenge, where the majority of time was spent in meeting rooms to align on data quality, for example. >> Yeah. And the point you were making, Christoph, about context switching — this is a point that Zhamak makes quite often: we've contextualized our operational systems, like our sales systems, our marketing systems, but not our data systems. So you're asking the data team, okay, be an expert in sales, be an expert in marketing, be an expert in logistics, be an expert in supply chain, and it's start, stop, start, stop. It's a paper-cut environment, and it's just not as productive. But the flip side of that is when you think about a centralized organization, you think, hey, this is going to be a very efficient way, a cross-functional team to support the organization, but it's not necessarily the highest-velocity, most effective organizational structure. >> Yeah. So I agree with that piece, up to a certain scale. A centralized function has a lot of advantages, right? There's one team everyone can go to, a designated kind of expert team. However, if you actually would like to accelerate, especially given this type of growth, you want to have autonomy in certain teams and move the teams, or let's say the data, to the experts in those teams. And this, as you have mentioned, right, increases mental load. You can either internally start splitting your team into different kinds of sub-teams focusing on different areas; however, that is then again just adding another piece where collaboration needs to happen, because the teams are external to each other. So why not bridge that gap immediately and actually move these teams end to end into the functions themselves? So maybe just to continue what Clemence was saying, this is actually where Clemence's and my journey started to become one joint journey. Clemence was coming from one of these teams that built their own solutions. I was basically heading the platform team, called the data warehouse team in those days. And in 2019, when these struggles became more and more serious, I would say, when more and more people had recognized that this model does not really scale, the leadership of the company came together and identified data as a key strategic asset. And what we mean by that is that if we leverage it in an appropriate way, it gives us a unique competitive advantage, which could help us to support and actually fully automate our decision-making process across the entire value chain.
So what we're trying to do now, or what we are aiming for, is that HelloFresh is able to build data products that have a purpose. We're moving away from the idea that data is just a by-product. We have a purpose for why we would like to collect this data; there's a clear business need behind it. And because it's so important for the company as a business, we also want to provide it as a trustworthy asset to the rest of the organization. I wouldn't say it's the best customer experience yet, but at least it's in a way that users can easily discover, understand and securely access high-quality data. >> Yeah. So, Clemence, when you see Zhamak's writing, you see she has the four pillars and the principles. As practitioners, you look at that and say, okay, hey, that's pretty good thinking, and now we have to apply it. And that's where the devil meets the details. So it's the four: decentralized data ownership; data as a product, which we'll talk about a little bit; self-serve, which you guys have spent a lot of time on; and, Clemence, your wheelhouse, which is governance and a federated governance model. And it's almost like if you achieve the first two, then you have to solve for the second two — it almost creates new challenges. But maybe you could talk about that a little bit as to how it relates to HelloFresh. >> Yes. So Chris has mentioned that we identified this challenge beforehand and asked, how can we actually decentralize and empower our different colleagues? And we realized that it was more an organizational or a cultural change. And this is something that others have also mentioned — I think ThoughtWorks mentioned in one of the white papers that it's more of an organizational or a cultural impact. And we kicked off a phased reorganization — we're currently still in the middle of the different phases — trying to unlock this data at scale. And the idea was really moving away from ever-growing, complex matrix organizations or matrix setups, and splitting between two different things. One is value creation: basically, when people ask the question, what can we actually do, what should we do — this is value creation. And the how, which is capability building. And both are equal in authority. This then creates a high need for collaboration, and this collaboration breaks up the different silos that were built. And of course, this also includes different staffing needs for teams — staffing with more, let's say, data scientists or data engineers, data professionals, in those business domains — and hence, some more capability building. >> Okay, go ahead. Sorry. >> So back to Zhamak Dehghani. The idea then crossed over when she published her papers in May 2019. And we thought, well, the four pillars that she described — decentralized data ownership, a data-as-a-product mindset, a self-service infrastructure, and, as you mentioned, federated computational governance — fit very well with our thinking at that point in time to reorganize the different teams. And this then led to not only an organizational restructuring, but also a completely new approach to how we need to manage data. >> Got it. Okay. So your business is exploding.
The data team was having to become domain experts in many areas, constantly context switching, as we said, and people started to take things into their own hands. So again, we said, classic story, but you didn't let it get out of control, and that's important. And so we actually have a picture of kind of where you're going today, and it's evolved into this. Pat, if you could bring up the picture with the elephant — here we go. So I will talk a little bit about the architecture. It doesn't show the spreadsheet era here, but Christoph, maybe you could talk about that. It does show the Hadoop monolith, which exists today; I think that's in a managed hosting service, but you preserved that piece of it. But if I understand it correctly, everything is evolving to the cloud; I think you're running a lot of this, or all of it, in AWS. Everybody's got their own data sources. You've got a data hub, which I think is enabled by a master catalog for discovery, and all this underlying technical infrastructure that is really not the focus of this conversation today. But the key here, if I understand correctly, is that these domains are autonomous, and that not only required technical thinking, but a really supportive organizational mindset, which we're going to talk about today. But Christoph, maybe you could address, you know, at a high level, some of the architectural evolution that you guys went through. >> Yeah, sure. Maybe it's also a good summary of the entire history. So as you have mentioned, right, in the very beginning we started with a monolith on the operational plane. Actually, it wasn't just one monolith, it was two: one for the backend and one for the front end. And our analytical plane was essentially a couple of spreadsheets. And I think there's nothing wrong with spreadsheets: they allow you to store information, transform data, share this information, visualize this data — but it's all in one tool, it's not actually separating concerns, right? And this means that it's obviously not scalable. You reach the point where this kind of data management in one tool hits its limits. So what we started is we created our data lake, as we have seen here, on Hadoop, and in the very beginning it actually very much mirrored our operational systems. On top of that, we used Impala as a data warehouse, but there was not really a distinction between what was our data warehouse and what was our data lake, as Impala was used as the engine for both the warehouse and the data lake itself. And this organic growth actually led to a situation, as I think is clear now, where we had a centralized model in which, across all the domains, the Kimball modeling standards were really loose and there was no uniformity; we had built, in-house, a way of building materialized views that we used for the presentation layer; there was a lot of duplication of effort; and in the end, the maintenance and feedback loops that would have helped us improve what we had built were missing — leading naturally, as you said, to the lack of trust. And this basically was the starting point for us to understand, okay, how can we move away from this? And there are a lot of different things that we can discuss, apart from this organizational structure that we have set up here around the three or four pillars from Zhamak.
However, there's also the next, extra question around how we implement data products, right — what the implications are on that level — and I think that's something that is currently still in progress for us. >> Got it. Okay. So I wonder if we could switch gears a little bit and talk about the organizational and cultural challenges that you faced. What were those conversations like? Let's dig into that a little bit. I want to get into governance as well. >> The conversations on the cultural change — I mean, yes, we went through hyper-growth over the last years, and obviously there were a lot of new joiners, a lot of different, very, very smart people joining the company, which then meant that collaboration got a bit more difficult. Of course, there are time zone differences, and there were different artifacts and recreated documentation flying around. So we had to build the company from scratch, right? Of course, this then always resulted in this tension, which I described before. But the most important part here is that data has always been a very important factor at HelloFresh, and we collected more of this data and continued to use data to improve the different key areas of our business. Even with organizational struggles, like the central-team struggles, data somehow always helped us to grow through this kind of change, right? In the end, those decentralized teams in our local geographies started with solutions that served the business, which was very, very important — otherwise we wouldn't be at the place where we are today — but they did violate best practices and standards. And I always use the sports analogy, Dave. Like in any sport, there are different rules and regulations that need to be followed. These rules are defined by, I'll call it, the sports association, and this is how you can think about our data governance and our compliance team. Now we add the players, who need to follow those rules and abide by them — this is what we then call data management. Now, the different players, the professionals, also need to be trained and understand the strategy and the rules before they can play — and this is what I then call data literacy. So we realized that we need to focus on helping our teams to develop those capabilities and teach the standards for how work is being done, to truly drive functional excellence in the different domains. And one of the ambitions of our data literacy program, for example, is to really empower every employee at HelloFresh — everyone — to make the right data-informed decisions by providing data education that scales. And that can be different things, like including data capabilities in the learning paths, for example, right? So help them to create and deploy data products, connecting data producers and data consumers, and creating common sense and more understanding of each other's dependencies, which is important — for example, SLIs, SLOs, data contracts, et cetera — so people get more of a sense of ownership and responsibility. Of course, we have to define what that means: what does ownership mean, what does responsibility mean? But we are teaching this to our colleagues via individual learning paths and helping them upskill to use their shared infrastructure and those self-service data applications as well. And all in all, to summarize, we are still in this process of learning. We're still learning as well.
So learning never stops at HelloFresh, but we are really trying to make it as much fun as possible. And in the end, we all know user behavior is changed through positive experience. So instead of having massive training programs, over endless courses of workshops, leaving our new joiners and colleagues confused and overwhelmed, we're applying gamification, right? So we split it into different levels of certification, where our colleagues have access points and can earn badges along the way, which simplifies the process of learning and the engagement of the users. And this is what we see in surveys, for example, where our employees value this gamification approach a lot and are even competing to collect those learning badges, to become the number one on the leaderboard. >> I love the gamification. I mean, we've seen it work so well in so many different industries, not the least of which is crypto. So you've identified some of the process gaps that you saw, and you didn't just gloss over them. Sometimes, I say, people pave the cow path. You didn't try to force, in other words, a new architecture into the legacy processes; you really had to rethink your approach to data management. So what did that entail? >> To rethink the way of data management, 100%. If I take the example of a revolution — the industrial revolution, or a classical supply chain revolution — just imagine that you have been riding a horse your whole life, and suddenly you can operate a car: you suddenly receive a completely new way of transporting assets from A to B. So we needed to establish a new set of cross-functional business processes to run faster, drive faster, more robustly, and deliver data products which can be trusted and used by downstream processes and systems. Hence we had a set of new standards and new procedures that fall into the internal data governance and compliance sector. With internal, I'm always referring to the data operations around new things like the data catalog: how to identify ownership, how to change ownership, how to certify data assets — everything around classical software development, which we now apply to data. This is some old and new thinking, right? Deployment, versioning, QA, ingestion policies, deletion procedures — all the things that software development has been doing, we do now with data as well. In simple terms, it's a whole redesign of the supply chain of our data, with new procedures and new processes in asset creation, asset management and asset consumption. >> So data's become kind of the new development kit, if you will. I want to shift gears and talk about the notion of a data product, and we have a slide that we pulled from your deck. I'd like to unpack it a little bit. I'll just read it: a data product is a product whose primary objective is to leverage data to solve customer problems, where customers are both internal and external. So, pretty straightforward. I know you've gone much deeper in your thinking and into your organization, but how do you think about that, and how do you determine, for instance, who owns what? How did you get everybody to agree? >> I can take that one. Maybe let me start with what a data product is. I think that's an ongoing debate, right? And I think the debate itself is the important piece here: through the debate you clarify what you actually mean by a product, and what the mindset actually is.
So just from a definition perspective, I think we found the common denominator: a data product is something which is important for the company, that comes with value. What do we mean by that? Okay, it's a solution to a customer problem that delivers, ideally, maximum value to the business. And yes, it leverages the power of data. And we have a couple of examples at HelloFresh: the historical and classical ones around dashboards, for example, to monitor our error rates, but also more sophisticated ones, for example incorporating machine learning algorithms in our recipe recommendations. However, I think the important aspects of a data product are, A: there is an owner, right? There's someone accountable for making sure that the product you're providing is actually served and maintained, and someone who's making sure that it actually keeps delivering the value we are promising. Combined with that is the idea of proper documentation — like a product description, right, so people understand how to use it and what it is about. And related to that piece is the idea that there's a purpose, right? We need to ask ourselves, okay, why does this thing exist? Does it provide the value that we think it does? That then leads into a good understanding of the life cycle of the data product. What do we mean? Okay, from the beginning, from the creation, you need to have a good understanding, you need to collect feedback, you need to learn from that, you need to rework it, and finally, you also need to think about, okay, when is it time to decommission that piece? So overall, I think the core of a data product is product thinking 101, right? The starting point needs to be the problem and not the solution. And this is essentially what we have seen was missing, what brought us to this kind of data spaghetti that we had built there in a rush, essentially. Certain data assets were developed in isolation, continuously patching the solution just to fulfill the ad hoc requests that we got, without actually really understanding what the stakeholder needs. And the interesting piece is that this results in duplication of effort. And this is not just frustrating and probably not the most efficient way for the company to work — if the same data assets are built with slightly different assumptions across the company by multiple teams, that also leads to data inconsistency. And imagine the following scenario: you, from a management perspective, ask a specific question, and you get, from a couple of different teams, different kinds of graphs, different kinds of data and numbers. And in the end, you do not know which ones to trust. You do not know whether what you're observing is actually noise, or whether there is actually the signal you're looking for. And it's the same if I'm running an AB test, right? I have a new feature, and I would like to understand its business impact. I run that against a specific source, and in an unfortunate scenario, your production system is actually running on a different source. You see different numbers, and what you have seen in the AB test is not what you then see in production — a typical thing.
Then you're asking some analytics team to do a deep dive, to understand where the discrepancies are coming from — and worst-case scenario, again, they use a different kind of source. So in the end, it's a pretty frustrating scenario, and it's actually a waste of the time of the people who have to identify the root cause of this type of divergence. So, in a nutshell, the highest degree of consistency is actually achieved if people are just reusing data assets. And in the end, as in the meetup talk we've given, right, we started trying to establish this approach with AB testing. So we have a team that is kind of owning the target metrics associated with the business teams, and they're providing those as a product to other services as well, including the AB testing team. The AB testing team can use this information through a defined interface and say, okay, I'm drawing information from the metadata of an experiment. And in the end, after the assignment, after the data collection phase, they can easily add a graph to a dashboard, just grouped by the AB testing variant. And we have seen that in other companies too, so it's not just a nice dream that we have, right? I have actually seen it at other companies, for example around search, where a complete KPI pipeline was established that computed all this information, and this information was both hosted by the team and used for AB testing, deep dives and regular reporting. So, just one last second on the important piece: why I'm coming back to this is that it requires that we are treating this data as a product, right? If we want multiple people to use the thing that I am owning and building, we have to provide it as a trustworthy asset, and in a way that it's easy for people to discover and to actually work with. >> Yeah. And coming back to that — so this is, to me, why I get so excited about data mesh, because I really do think it's the right direction for organizations. When people hear data product, they think, "Well, what does that mean?" But then when you start to define it as you did — it's using data to add value. That could be cutting costs, that could be generating revenue, it could be actually directly creating a product that you monetize. So it's sort of in the eyes of the beholder. But I think the other point that we've made — you made it earlier on too — is, again, context. So when you have a centralized data team and you have all these P&L managers, a lot of times they'll question the data 'cause they don't own it. They're like, "Well, wait a minute." If it doesn't agree with their agenda, they'll attack the data. But if they own the data, then they're responsible for defending it. And that is a mindset change that's really important. And I'm curious how you got to that ownership. Was it top-down, or was somebody providing leadership? Was it more organic, bottom-up? Was it a sort of combination? How did you decide who owned what? In other words, how did you get the business to take ownership of the data, and what does owning the data actually mean? >> That's a very good question, Dave. I think that's one of the pieces where we have a lot of learnings, and basically, if you ask me where we could have done better, I think that would be the first piece to start with: really thinking about how to approach what it means if a team has ownership, right?
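As an aside, the attributes Christoph lists — an accountable owner, documentation, a purpose, a life cycle, known consumers — could be captured in a small descriptor. The following is a minimal sketch under those assumptions; the field names, example values and URL are illustrative, not HelloFresh's actual schema:

```python
# A minimal sketch of a data product descriptor capturing the attributes
# discussed above: owner, purpose, documentation and lifecycle.
# Field names are illustrative assumptions, not HelloFresh's schema.
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    name: str                  # discoverable identifier, e.g. in a data catalog
    owner: str                 # team accountable for serving and maintaining it
    purpose: str               # why the product exists; the problem it solves
    documentation_url: str     # product description for consumers
    lifecycle: str = "active"  # e.g. "active" or "decommissioned"
    consumers: list[str] = field(default_factory=list)  # known downstream users

    def is_publishable(self) -> bool:
        """A product without an accountable owner and a purpose should not be published."""
        return bool(self.owner and self.purpose)

# Example: a metrics product owned by one team and reused by AB testing and reporting.
conversion_metrics = DataProduct(
    name="conversion_metrics",
    owner="growth-analytics",
    purpose="Single source of truth for conversion KPIs, reused by AB testing and reporting",
    documentation_url="https://wiki.example.com/data/conversion_metrics",  # hypothetical URL
    consumers=["ab-testing", "weekly-reporting"],
)
assert conversion_metrics.is_publishable()
```

Publishing a descriptor like this next to the asset is one way to make the "trustworthy, discoverable asset" idea concrete: a consumer such as the AB testing team can look up the owner and purpose before depending on the product.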
That means the team has the responsibility to host the data assets themselves to minimum acceptable standards, with minimal dependencies up- and downstream. The interesting piece, looking backwards, is that under that definition, the process we had to go through was not actually transferring ownership from a central team to the other teams, but in most cases establishing ownership. I make this distinction because saying we have to transfer ownership would erroneously suggest that the data set was really owned before. The platform team, yes, they had the capability to make changes, but it was actually the analytics teams and the business who understood the use cases, and no one had actually bought into owning these assets — it's actually more expensive than expected. So we had to go through this very lengthy process of establishing ownership. How we did that: in the beginning, we very naively started with, here's a document, here are all the data assets, who is probably the nearest neighbor who can actually take care of this? And then we moved it over. But the problem is that all these things were kind of technical debt, right? Not really properly documented, pretty unstable, built in a very inconsistent way over the years, and the people who built these things had already left the company. So this is not a nice thing to receive, and people build up a certain resistance, even if they have actually bought into this idea of domain ownership. So if you ask me about these learnings: what needs to happen is, first, the company needs to really understand what its core business concepts are. We need to have a mapping from those core business concepts to the domain teams who own them, and then actually link that to the assets and integrate it better — both understanding how we can evolve the data assets and build new things in the domains, but also how we can address the reduction of technical debt and stabilize what we already have. >> Thank you for that, Christoph. So I want to turn direction here and talk to Clemence about governance. And I know that's an area you're passionate about. I pulled this slide from your deck, which I kind of messed up a little bit, sorry for that. But by the way, we're going to publish a link to the full video that you guys did, so we'll share that with folks. It's one of the most challenging aspects of data mesh: if you're going to decentralize, you quickly realize this could be the wild west, as we talked about, all over again. So how are you approaching governance? There are a lot of items on this slide that underscore the complexity, whether it's privacy, compliance, et cetera. So how did you approach this? >> Yeah, it's about connecting those dots, right? The aim of the data governance program is to promote the autonomy of every team while still ensuring that everybody has the right interoperability. So when we want to move from the wild west, riding horses, to a civilized way of transport, I can take the example of modern street traffic: all participants can maneuver independently, and as long as they follow the same rules and standards, everybody remains compatible with each other and can understand and learn from each other, so we can avoid car crashes.
So when I go from country to country, I understand what the street infrastructure means, how I drive my car, and I can read the traffic lights and the different signals. Likewise, as a business, at HelloFresh we operate autonomously and consequently need to follow the external and internal rules and standards set forth by the jurisdictions in which we operate. So in order to prevent a car crash, we need to at least ensure compliance with regulations, to account for society's and our customers' increasing concern with data protection and privacy. So teaching, advocating and evangelizing this to everyone in the company was a key communication strategy. And of course — I mentioned data privacy and external factors — the same goes for internal regulations and processes, to help our colleagues adapt to this very new environment. When I mentioned before the new way of thinking, the new way of dealing with and managing data, this of course implies that we need new processes and regulations for our colleagues as well. In a nutshell, this means that data governance provides a framework for managing our people, the processes, the technology and the culture around our data traffic. And that governance must come together in order to have an effective program. Providing at least a common denominator is especially critical for shared data sets, which we manage across our different geographies, and for shared applications on shared infrastructure, as then consumed by centralized processes — for example, master data and all the metrics and KPIs, which are also used for central steering. It's a big change, right? And our ultimate goal is to have this non-invasive, federated, automated and computational governance. And for that, we can't just talk about it; we actually have to go deep, use case by use case, PoC by PoC, and generate learnings with the different teams. This would be a classical approach: identifying the target state and matching it with the current state, by working together with the business teams in the different domains, and having a risk assessment, for example, to increase transparency — because a lot of teams might not even know what kind of situation they are in. And this is where the training and the data literacy piece come into place, where we go in and train based on the findings, based on the most valuable use cases, and based on that, help our teams to make this change and to increase their capability. It takes, I wouldn't say hand-holding, but a lot of guidance. >> Can I kind of chime in quickly on that? I mean, there's a lot to the governance piece, but I think this is important. If you're talking about documentation, for example, yes, we can go from team to team and tell these people, hey, you have to document your data assets in the data catalog, or you have to establish a data contract, and so on and so forth. But if we would like to build data products at scale, following this governance, we need to think about automation, right? We need to think about a lot of things that we can learn from engineering, and it starts with simple things. If we would like to build up trust in our data products, right, and actually want to apply the same rigor and the best practices that we know from engineering, there are things that we can do, and we should probably think about what we can copy.
And one example might be service level agreements, service level objectives and service level indicators, right, which represent, on an engineering level, how we are providing services. The agreements represent the promises we make to our customers and to our consumers; the objectives are the internal targets that help us keep those promises; and the indicators are how we track ourselves, how we are doing. And this is just one example of where I think federated governance comes into play, right? In an ideal world, you should not just talk about data as a product, but also about data products as code. That is to say, as much as possible, right, give the engineers the tools that they are familiar with, and not ask the product managers, for example, to document the data assets in the data catalog by hand, but make it part of the configuration, as part of a CI/CD continuous delivery pipeline, as we typically see in other engineering tasks and services. Say, okay, there is configuration: we can think about PII, we can think about data quality monitoring, we can think about ingestion, the data catalog, and so on and so forth. Ideally, data products become a sort of template that can be deployed and is actually verified, or rejected, at build time, before we actually deploy it to production. >> Yeah, so it's like DevOps for data products. So I'm envisioning almost a three-phase approach to governance, and it sounds like you're in the early phase of it — call it phase zero — where there's learning, there's literacy, there's training, education, there's kind of self-governance. And then there's some kind of oversight, a lot of manual stuff going on; you're trying to be process builders at this phase, and then you codify it, and then you can automate it. Is that fair? >> Yeah, though I would rather think about automation as early as possible, in a way. Yes, there need to be certain rules, but then actually start, use case by use case: is there any small piece that we can already automate? If possible, roll that out, and then extend it step by step. >> Is there a role, though, that adjudicates that? Is there a central, you know, chief data officer who's responsible for making sure people are complying, or how do you handle it? >> I mean, from a platform perspective, yes, we aim to implement certain pieces that we are saying are important and would actually like to implement; however, that works very closely with the governance department. So it's Clemence's piece to understand and define the policies that need to be implemented. >> So good. So Clemence, essentially it's your responsibility to make sure that the policy is being followed, and then, as you were saying, Christoph, you want to compress the time to automation as fast as possible. Is that-- >> Yeah, what needs to be really clear is that it's always a split effort, right? You can't just do one or the other thing; they really go hand in hand, because for the right information and the right engineering tooling, we need to have the transparency first. I mean, policies need to be codified. So we kind of need to operate on the same level, with the right understanding.
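A hedged sketch of the build-time verification Christoph describes above — a CI step that verifies, or rejects, a data product's configuration before deployment. The required keys and the PII/retention rule are assumptions for illustration, not HelloFresh's actual policy set:

```python
# A sketch of a CI/CD build-time check for a data product configuration,
# in the spirit of "verified or rejected at build time" described above.
# The required keys and the PII rule are illustrative assumptions.
import sys

REQUIRED_KEYS = {"owner", "documentation_url", "pii_fields", "freshness_slo_hours"}

def validate(config: dict) -> list[str]:
    """Return a list of policy violations; an empty list means the build may proceed."""
    errors = [f"missing required field: {key}" for key in REQUIRED_KEYS - config.keys()]
    if config.get("pii_fields") and not config.get("retention_days"):
        errors.append("products containing PII must declare retention_days")
    return errors

if __name__ == "__main__":
    # In a real pipeline this config would be loaded from the repository,
    # e.g. a YAML file living next to the pipeline code.
    config = {
        "owner": "growth-analytics",
        "documentation_url": "https://wiki.example.com/data/conversion_metrics",  # hypothetical
        "pii_fields": ["email"],
        "freshness_slo_hours": 24,
        "retention_days": 90,
    }
    violations = validate(config)
    for violation in violations:
        print(f"policy violation: {violation}")
    sys.exit(1 if violations else 0)  # a non-zero exit code fails the CI step
```

Wired into the delivery pipeline, a failing check stops the deploy — one way the non-invasive, computational governance discussed here can stay non-invasive: the policy lives in code rather than in a review meeting.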
So there are actually two things that are important: one is policies and guidelines, but not only that, because equally important is to align with the end users and the tech teams and engineering, and really bridge between the business teams and the engineering teams. >> Got it. So just a couple more questions, because we've got to wrap up. I want to talk a little bit about the business outcome. I know it's hard to quantify, and I'll talk about that in a moment, but — major learnings. We've got some of the challenges that you cited; I'll just put them up here. We don't have to go into detail on this, but I just wanted to share it with folks. But my question — I mean, this is the advice-for-your-peers question — if you had to do it differently, if you had a do-over or a Mulligan, as we like to say for you golfers, what would you do differently? >> I mean, can we start with the transformational challenge: understanding that it also carries a high load of cultural change. I think this is important — a particular communication strategy needs to be put into place, and people really need to be supported, right? It's not that we just go in and say, well, we have to change towards data mesh. Naturally, it's human nature to be kind of resistant to change, right, and change makes people uncomfortable. So we need to take that away through training and communication. Chris, you might want to add something to that. >> Definitely. I think the point that I've also made before, right: we need to acknowledge that data mesh is an architecture for scale. It's something that makes sense for huge companies that are building products at scale. I mean, Dave, you mentioned that, right — there are a lot of advantages to having a centralized team, but at some point it may make sense to actually decentralize. And at this point, right, if you think about data mesh, you have to recognize that you're not building something on a green field. And I think there's a big learning, which is also reflected on the slide: don't underestimate your baggage. Typically, you come to a point where the old model doesn't work anymore. At HelloFresh, right, we lost the trust in our data, and we actually saw certain risks of slowing down our innovation. This was triggering the need to actually change something. So this transition implies that we had a lot of technical debt accumulated over the years. And I think what we have learned is that we potentially decentralized some assets too early, not actually taking into account the maturity of the teams we were handing them to — and now we are actually in the phase of correcting pieces of that, right? But if you start from scratch, you have to understand, okay, are all my teams actually ready to take on this new capability? And you have to make sure that, with this decentralization, you build up these capabilities in the teams, and, as Clemence has mentioned, make sure that you take the people on your journey. I think these are the pieces. It also comes with a knowledge gap, right, that we need to think about: hiring, literacy, the technical debt I just talked about.
And I think the last piece that I would add now, which is not here on the slide deck: from our perspective, we started on the analytical layer, because that was kind of where things were exploding, right — this is the bit where people feel the pain. But through a lot of the efforts we have started, to actually modernize the current stack and data products towards data mesh, we've understood that it always comes down, basically, to a proper shape of our operational plane. We got through a lot of pains, but the learning here is that this really needs to be a commitment from the company. It needs to happen end to end. >> I think that last point you made is so critical, because I hear a lot from the vendor community about how they're going to make analytics better, and that's not unimportant. But true data product thinking and decentralized data organizations really have to operationalize in order to scale. So these decisions around data architecture and organization are fundamental and lasting; it's not necessarily about an individual project ROI. There are going to be projects, sub-projects, you know, within this architecture, but the architectural decision itself is organizational, it's cultural, and it's about what's the best approach to support your business at scale. It really speaks to who you are as a company, how you operate, and getting that right, as we've seen in the success of data-driven companies, yields tremendous results. So I'll ask each of you to give us your final thoughts, and then we'll wrap. Maybe-- >> Can I quickly jump in on this piece you mentioned, right, the target architecture? If you talk about these pieces, people often have this picture of layers in mind. Okay, there are different kinds of stages: we have an ingestion layer, we have a storage layer, a transformation layer, a presentation layer, and then we basically put a lot of technology on top of that. That's kind of the target architecture. However, I think what we really need to make sure of is that we have these different kinds of views, right? We need to understand what the capabilities are that we need, what the goals are, how it looks and feels from the different personas' and experience views, and then, finally, that should lead to the target architecture from a technical perspective. Maybe just to give an outlook on what we are planning to do, how we want to move that forward: based on our strategy, we would like to increase the data maturity as a whole across the entire company. And this is kind of a framework around the business strategy, and it breaks down into four pillars as well. People — meaning the data culture, data literacy, the data organizational structure, and so on. Governance — as Clemence actually mentioned: compliance, data management, and so on. Technology — and I think we could talk for hours about that one; it's around the data platform and the data science platform. And then, finally, enablement through data — meaning we need to understand data quality, data accessibility, applied science and data monetization. >> Great. Thank you, Christoph. Clemence, why don't you bring us home? Give us your final thoughts. >> Okay.
I can just agree with Christoph that what's important is to understand what maturity level a company, its people and the organization are at, and really understand what kind of change applies to those four pillars — for example, what needs to be tackled first. And this is not very clear from the very beginning. It's kind of like a green field: you come up with must-wins, with things that you really want to do, out of theory and out of different white papers. Only when you really start conducting the first initiatives do you understand how you have to put those thoughts together, and where you're missing out on one of those four different pillars — people, process, technology and governance. And then do the integration step by step, small steps by small steps, not boiling the ocean, so that you're really capable of identifying the gaps and seeing where you can either fill the gaps, or where you have to increase maturity first and train people, or improve your tech stack. >> You know, HelloFresh is an excellent example of a company that is innovating. It was not born in Silicon Valley, which I love; it's a global company. And I've got to ask you guys — it seems like it's just an amazing place to work. Are you guys hiring? >> Yes, definitely, we do. As mentioned, right, we are distributed and actually hiring as an entire company, specifically for data. There are a lot of open roles, so yes, please visit our careers page — from data engineering to data product management — and Clemence has a lot of roles that you can speak to him about. But yes. >> Guys, thanks so much for sharing with theCUBE audience. You're pioneers, and we look forward to collaborations in the future to track progress. We really want to thank you for your time. >> Thank you very much. >> Thank you very much, Dave. >> And thank you for watching theCUBE's startup showcase, made possible by AWS. This is Dave Volante. We'll see you next time. (cheerful music)

Published Date : Sep 15 2021

Dave Rensin, Google | Google Cloud Next 2018


 

>> Live from San Francisco, it's theCUBE. Covering Google Cloud Next 2018, brought to you by Google Cloud and its ecosystem partners. >> Welcome back everyone, it's theCUBE live in San Francisco at Google Cloud's big event, Next 18 — GoogleNext18 is the hashtag. I'm John Furrier with Jeff Frick. Our next guest is Dave Rensin, director of CRE and network capacity at Google. CRE stands for Customer Reliability Engineering, not to be confused with SRE, which is Google's heralded program, Site Reliability Engineering — a category changer in the industry. Dave, great to have you on. Thanks for coming on. >> Thank you so much for having me. >> So we had a meeting a couple months ago, and I was just so impressed by how much thought and engineering and business operations have been built around Google's infrastructure. It's a fascinating case study in the history of computing. You guys obviously power yourselves, and the Cloud is just massive. You've got the Site Reliability Engineer concept that now is — I won't say boilerplate, but it's certainly the guiding architecture for how enterprises are going to start to operate. Take a minute to explain the SRE and the CRE concepts within Google. I think it's super important; you guys, again, pioneered something pretty amazing with the SRE program. >> Well, I mean, like everything, it was just formed out of necessity for us. We did the calculation 12 or 13 years ago, I think. We sat down with a piece of paper and we said, well, the number of people we need to run our systems scales linearly with the number of machines, which scales linearly with the number of users and the complexity of the stuff you're doing. Alright, carry the two, divide by six, plot the line. In ten years — now, this is 13 or 14 years ago — we're going to need one million humans to run Google. And that was at the growth and complexity of 10 or 12 years ago. >> Yeah, Search. (laughs) >> Search, right? We didn't have Android, we didn't have Cloud, we didn't have Assistant, we didn't have any of these things. We were like, well, that's not going to work. We're going to have to do something different, and that's kind of where SRE came from. The basic philosophy is simple: give to the machines all the things machines can do, and keep for the humans all the things that require human judgment. And that's how we get to a place where like 2,500 SREs run all of Google. >> And that's massive, and there's billions and billions of users. >> Yeah. >> Again, I think this is super important, because at that time it was a telltale sign for you guys to wake up and go, well, I can't get a million humans. But it's now becoming, in my opinion, what the enterprise is going through in this digital transformation, whatever we call it these days — consumerization of IT, now it's digital transfor-- whatever it is, the role of the human-machine interaction is now changing. People need to do more. They can collect more data than ever before; it doesn't cost them that much to collect data. >> Yeah. >> We just heard from the BigQuery guys — some amazing stuff happening. So now enterprises are almost going through the same changeover that you guys had to go through, and this is now super important, because now you have the tooling and the scale that Google has. And so it's almost like a level up, fast. So, how does an enterprise become SRE-like, quickly, to take advantage of the Cloud? >> So, you know, I would like to say this is all sort of a deliberate march of a multi-year plan.
But it wasn't, it was a little accidental. Starting two or three years ago, companies were asking us, they were saying, we're getting mired in toil. Like, we're not able to innovate because we're spending all of our budget and effort just running the things and turning the crank. How do you have billions of users and not have this problem? We said, oh, we use this thing called SRE. And they're like, please use more words. And so we wrote a book. Right? And we expected maybe 20 people would read the book, and it was fine. And we didn't do it for any other reason other than that seemed like a very scalable way to tell people the words. And then it all just kind of exploded. We didn't expect that it was going to be true, and so a couple of years ago we said, well, maybe we should formalize our interactions, we should go out proactively and teach every enterprise we can how to do this and really work with them, and build up muscle memory. And that's where CRE comes from. That's my little corner of SRE. It's the part of SRE that, instead of being inward focused, we point out to companies. And our goal is that every firm from five to 50 thousand can follow these principles. And they can. We know they can do it. And it's not as hard as they think. The funny thing about enterprises is they have this inferiority complex, like they've been told for years by Silicon Valley firms, in sort of this derogatory way, that, you're just an enterprise. We're the innovate-- That's-- >> Buy our stuff. Buy our software. Buy IT. >> We're smarter than you! And it's nonsense. There are hundreds and hundreds of thousands of really awesome engineers in these enterprises, right? And if you just give them a little latitude. And so anyway, we can walk these companies on this journey and it's been, I mean you've seen it, it's just been snowballing the last couple of years. >> Well, the developers certainly have changed the game. We've seen with Cloud Native the role of developers doing toil, or specific longer-term projects at the app level that IT would support. So you had this traditional model that's been changed with agile, et cetera. And DevOps, so that's great. So you know, golf clap for that. Now it's like scale >> No, more than a golf clap, it's been real. >> It's been a high five. Now it's like, they've got to go to the next level. The next level is how do you scale it, how do I get more apps, how am I going to drive more revenue, not just reduce the cost? But now you've got operators, now I have to operate things. So I think the persona of what operating something means, what you guys have hit with SRE, and CRE is part of that program, that's really, I think, the aha moment. So that's where I see it, and so how does someone read the book, put it in practice? Is it a cultural shift? Is it a reorganization? What are you guys seeing? What are some of the successes that you guys have been involved in? >> The biggest way to fail at doing SRE is to try to do all of it at once. Don't do that. There are a few basic principles that, if you adhere to them, the rest of it just comes organically at a pace that makes sense for your business. The easiest thing to think of is simply-- If I had to distill it down to a few simple things, it's just this. Any system involving people is going to have errors. So any goal you have that assumes perfection, 100% uptime, 100% customer satisfaction, zero errors, that kind of thing, is a lie. You're lying to yourself, you're lying to your customers.
It's not just unrealistic, it's, in a way, kind of immoral. So you've got to embrace that. And then that difference between perfection and the amount, the closeness to perfection, that your customers really need, cuz they don't really need perfection, should be just a budget. We call it the error budget. Go spend the budget, because above that line your customers are indifferent, they don't care. And that unlocks innovation. >> So this is important, I want to just make sure I slow down on this, error budget is a concept that you're talking about. Explain that, because this is, I think, interesting. Because you're saying it's BS that there's no errors, because there's always errors, right? >> Sure. >> So you just got to factor in, and how you deal with them is-- But explain this error budget, because this operating philosophy of saying deal with errors, so explain this error budget concept. >> It comes from this observation, which is really fascinating. If you plot reliability and customer satisfaction on a graph, what you will find is, for a while, as your reliability goes up, your customer satisfaction goes up. Fantastic. And then there's a point, a magic line, after which you hit this really deep knee. And what you find is, if you are much under that line your customers are angry, like pitchforks, torches, flipping cars, angry. And if you operate much above that line they are indifferent. Because the network they connect through is less reliable than you. Or the phone they're using is less reliable than you. Or they're doing other things in their day than using your system, right? And so, there's a magic line, actually there's a term, it's called an SLO, Service Level Objective. And the difference between perfection, 100%, and the line you need, which is very business specific, we say treat as a budget. If you overspend your budget, your customers aren't happy, cuz you're less reliable than they need. But if you consistently underspend your budget, because they're indifferent to the change and because it is exponentially more expensive for incremental improvement, that's literally resources you're wasting. You're wasting the one resource you can never get back, which is time. Spend it on innovation. And just that mental shift, that we don't have to be perfect, lets people do open and honest, blameless postmortems. It lets them embrace risk in innovation. We go out of our way at Google to find people who accidentally broke something, took responsibility for it, redesigned the system so that the next unlucky person couldn't break it the same way, and then we promote them and celebrate them. >> So you push the error budget, but then it's basically a way to do some experimentation, to do some innovation >> Safely. >> Safely. And what you're saying is, obviously the line of unhappy customers, it's like Gmail. When Gmail breaks, people are like, the world freaks out, right? But, I'm happy with Gmail right now. It's working. >> But here's the thing, Gmail breaks very, very little, very, very often. >> I never noticed it breaking. >> Would you notice a difference of 10 milliseconds in delivery time? No, of course not. Now, would you notice an hour, or whatever? There's a line you would for sure notice. >> That's the SLO line. >> That's exactly right. >> You're also saying that if you try to push above that, it costs more and there's not >> And you don't care >> An incremental benefit >> That's right. >> It doesn't affect my satisfaction. >> Yeah, you don't care. >> I'm at nirvana, now I'm happy.
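The arithmetic behind the error budget Rensin describes is simple enough to sketch. A minimal illustration in Python, assuming a request-based SLI; the 99.9% target and the request counts are illustrative, not numbers from the interview:

# Error budget: the gap between perfection (100%) and the SLO your
# customers actually need is a budget you are allowed to spend.
def error_budget_status(slo_target: float, good: int, total: int) -> dict:
    """Report how much of the error budget has been spent."""
    allowed_failure_rate = 1.0 - slo_target   # e.g. 0.001 for a 99.9% SLO
    budget = allowed_failure_rate * total     # failed requests we may "spend"
    spent = total - good                      # failed requests actually seen
    remaining = (budget - spent) / budget if budget else 0.0
    return {"budget": budget, "spent": spent, "remaining_fraction": remaining}

# 10 million requests in the period, a 99.9% SLO, 4,200 observed failures:
status = error_budget_status(0.999, good=10_000_000 - 4_200, total=10_000_000)
print(status)  # budget ~10,000, spent 4,200 -> about 58% of the budget remains

While the remaining fraction is positive, the team keeps shipping and experimenting; when it hits zero, reliability work takes priority. That is the "spend it on innovation" trade described above.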
>> Yeah. >> Okay, and so what does that mean now for putting things in practice? What's the ideal error budget, that's an SLO? Is that part of the objective? >> Well, that's part of the work to do as a business. And that's part of what my team does, is help you figure out, what is the SLO, what is the error budget that makes sense for you for this application? And it's different. A medical device manufacturer is going to have a different SLO than a bank or a retailer, right? And the shapes are different. >> And it's interesting, we hear SLA, the Service Level Agreement, it's an old term >> Different things. >> Different things, here, objective, if I get this right, is not just about speeds and feeds. There are also qualitative user experience objectives, right? So, am I getting that right? >> Very much so. SLOs and SLAs get confused a lot because they share two letters. But they don't mean anywhere near the same thing. An SLA is a legal agreement. It's a contract with your user that describes a penalty if you don't meet a certain performance. Lawyers, and sometimes sales or marketing people, drive SLAs. SLOs are different things, driven by engineers. They are quantitative measures of your users' happiness right now. And exactly to your point, it's always from the user's perspective. Like, your user does not care if the CPU in your fleet spiked. Or the memory usage went up by x. They care, did my mail delivery slow down? Or is my load balancer not serving things? So, focus from your user backwards into your systems, and then you get much saner things to track. >> Dave, great conversation. I love the innovation, I love the operating philosophy, cuz you're really nailing it in terms of, you want to make people happy, but you're also pushing the envelope. You want to get these error budgets so we can experiment and learn, and not repeat the same mistake. That sounds like automation to me. But I want you to take a minute to explain, SRE, that's an inward facing thing for Google, you are called a CRE, a Customer Reliability Engineer. Explain what that is, because I heard Diane Greene saying, we're taking a vertical focus. She mentioned healthcare. Seems like Google is starting to get in, and applying a lot of resources, to the field, customers. What is a CRE? What does that mean? How is that a part of SRE? Explain that. >> So a couple of years ago, when I was first hired at Google, I was hired to build and run Cloud support. And one of the things I noticed, which you notice when you talk to customers a lot, is, you know, the industry's done a really fabulous job of telling people how to get to Cloud. I used to work at Amazon. Amazon does a fantastic job! Telling people, how do you get to Cloud? How do you build a thing? But we're awful, as an industry, about telling them how to live there. How do you run it? Cuz it's different running a thing in a Cloud than it is running it On-Prem. And you find that's the cause of a lot of friction for people. Not that they built it wrong, but they're just operating it in a way that's not quite compatible. It's a few degrees off. And so we have this notion of, well, we know how to operate these things at scale, that's what SRE is. What if, what if, we did a crazy thing? We took some of our SREs and instead of pointing them in at our production systems, we pointed them out at customers? Like, what if we genetically screened our SREs for, can talk to human, instead of can talk to machine? Which is what you optimize for when you hire an engineer.
And so we started CRE, it's this part of our SRE org that we point outwards to customers. And our job is to walk that path with you and really do it, to get like-- sometimes we go so far as even to share a pager with you. And really get you to that place where your operations look a lot like ours, we're talking that same language. >> It's custom too, you're looking at their environment. >> Oh yeah, it's bespoke. And then we also try to do scale things. We did the first SRE book. At the show just two days ago we launched the companion volume to the book, which is like-- cheap plug segment, where it's the implementation details. The first book's sort of a set of principles, these are the implementation details. Anything we can do to close that gap. I don't know if I ever told you the story, but when I was a little kid, when I was like six, like 1978, my dad, who's always loved technology, decided he was going to buy a personal computer. So he went to the largest retailer of personal computers in North America, Macy's in 1978, (laughs) and he came home with two things. He came home with a huge box and a human named Fred. And Fred the human unpacked the big box and set up the monitor, and the tape drive, and the keyboard, and told us about hardware and software and booting up, because who knew any of these things in 1978? And it's a funny story that you needed a human named Fred. My view is, I want to close the gap so that CREs are the Freds. Like, in a few years it'll be funny that you would ever need humans, from Google or anyone else, to help you learn how-- >> It's really helping people operate their new environment as a whole. It's a new, first generation problem. >> Yeah. >> Essentially. Well, Dave, great stuff. Final question, I want to get your thoughts. Great that we can have this conversation. You should come to the studio and go deeper on this, I think it's a super important, and new, role with SREs and CREs. But the show here, if you zoom out and look at Google Cloud, look down on the stage of what's going on this week, what's the most important story that should be told that's coming out of Google Cloud? Across all the announcements, what's the most important thing that people should be aware of? >> Wow, I have a definite set of biases, I won't lie. To me, the three most exciting announcements were GKE On-Prem, the idea that managed Kubernetes you can actually run in your own environment. People have been saying for years that hybrid wasn't really a thing. Hybrid's a thing, and it's going to be a thing for a long time, especially in enterprises. That's one. I think the introduction of machine learning to BigQuery, like anything we can do to bring those machine learning tools into these petabytes-- I mean, you mentioned it earlier. We are now collecting so much data that, as companies, we can't manage it. We can't even hire enough humans to figure out the right questions. So that's a big thing. And then, selfishly, in my own view of it, because of reliability, the idea that Stackdriver will let you set up SLO dashboards and SLO alerting, to me that's a big win too. Those are my top three. >> Dave, great to have you on. Our SLO at The Cube is to bring the best content we possibly can, the most interviews at an event, and get the data and share that with you live. It's The Cube here at Google Cloud Next 18, I'm John Furrier with Jeff Frick. Stay with us, we've got more great content coming. We'll be right back after this short break.
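The SLO alerting Rensin credits Stackdriver with at the end rests on the same budget idea, commonly expressed as a burn rate: how fast the budget is being spent relative to a steady, sustainable pace. A minimal sketch, again in Python; the user-facing failed-request SLI follows the SLO discussion above, but the 14x threshold and the short measurement window are illustrative assumptions, not Google's published policy:

# Burn rate: how many times faster than "sustainable" the error budget is
# being spent. A rate of 1.0 spends the budget exactly over the SLO period.
def burn_rate(bad: int, total: int, slo_target: float) -> float:
    observed_failure_rate = bad / total if total else 0.0
    allowed_failure_rate = 1.0 - slo_target   # e.g. 0.001 for a 99.9% SLO
    return observed_failure_rate / allowed_failure_rate

# Page a human only when the burn threatens the SLO, e.g. spending roughly a
# month's budget in about two days (rate >= 14), measured over a short window:
rate = burn_rate(bad=900, total=50_000, slo_target=0.999)
if rate >= 14:
    print(f"page: burning error budget at {rate:.1f}x the sustainable rate")

A slow burn above 1.0 might open a ticket instead of paging; the point, as with the budget itself, is that humans get interrupted only when users would actually notice.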

Published Date : Jul 26 2018


Andy Thulin & Wendy Wintersteen | Food IT 2017


 

>> Announcer: From the Computer History Museum in the heart of Silicon Valley, it's the Cube, covering Food IT, Fork to Farm. Brought to you by Western Digital. >> Hey, welcome back everybody. Jeff Frick here at the Cube. We're in Silicon Valley at the Computer History Museum, which celebrates history, but we're talking about tech in the food and agricultural space. Here at the Food IT Convention, about 350 people, somebody came all the way from New Zealand, we've got food manufacturers. We've got tech people, we've got big companies, start-ups, and we have a lot of representatives from academia, which we're always excited to have on, so our next guest is Dr. Andy Thulin, he's the Dean of the College of Agriculture, Food and Environmental Sciences at Cal Poly, San Luis Obispo, or SLO as we like to call them. Welcome. >> That's right. >> And all the way from Iowa, we have Dr. Wendy Wintersteen. She's the Dean of the College of Agricultural and Life Sciences at Iowa State. Welcome. >> Thank you, it's great to be here. >> Absolutely, so first off, just kind of your impressions of this event? Small, intimate affair, they actually introduced everyone this morning, which I thought was a pretty interesting thing. Kind of your first impressions. >> It's a great environment. We have this mix of technology and a few production people here, but people thinking about the future. That's always an exciting place to be. >> Really, the environment, having the little set of exhibits where people can go around, visit with entrepreneurs. It's really a great setting, I think, for the discussion. >> So, Wendy, when you introduced your portion on the panel, you talked about the scale on which Iowa produces a lot of things. Pigs, and corn, and eggs, and chickens, and, so, you've been watchin' this space for a while. How do you see, from your perspective, kind of this technology wave, as it hits? Is it new, have we just not been payin' attention? Or is there something different now? >> Well, I think the speed of adoption, the speed of innovation is increasing, clearly. But it's been a long time now that we've had power-drive tractors, so the farmers can sit and work on the technology in the cab related to their soil mapping, or yield monitors, and the tractor's driving itself. So, we've had that sort of thing in Iowa for a long time and that continues to be improved upon, but that'd be just one example of what we're seeing. >> And, obviously, California has a huge agricultural presence, again, some people know, some people don't, the valley from top to bottom is something on the order of 500 miles of a whole lot of agriculture, so again, do you see things changing? Is this more of the same? >> No, absolutely changing. I mean, California produces a little over 400 different products. A lot of 'em, about 100 of 'em, lead the country in terms of marketplace. So, there's a lot of technology with the issues of water, lack thereof, or cleaning it up, or the labor challenges that we have for harvesting products. It's really turned into quite a challenge, and challenge drives innovation, you know, when you have your back against the wall. For example, in the strawberry fields, I think a year ago they had $800 million worth of labor to produce $2.4 billion worth of strawberries. When you think about that, that's a lot of labor. When you can't get that labor in, you're drivin' by it, you got $300 million, wherever, they just weren't able to harvest it all 'cuz there was nobody to pick 'em.
So, when you think about that, it's a billion dollars. It's a billion dollars that they couldn't get to. That drives innovation, so there's a lot of innovation goin' into these products. >> Pretty interesting, 'cuz, obviously, the water one jumps out, especially here in California, you know we had a really wet winter. The reservoirs are full. In fact, they're lettin' water out of the things. I would say we don't have a water problem, we have a water storage problem. This came up earlier today. The points of emphasis change, the points of pain change, and labor came up earlier. The number of people, the minimum wage laws, and the immigration stuff that's going on. Again, that's a real concern if you've got a billion dollars worth of strawberries sittin' in a field that you can't get to. >> Yeah, it's a real challenge. California faces a couple of shortages. We've got a water shortage, we've got a labor shortage, but we also have a talent shortage. We were talking this morning about the number of young people going to Ag colleges. It's up dramatically, and we need all that talent and more. Everyone needs it, all the grain industry, if you will, across the country, all the people that run these farms and ranches, they're getting older. Who's coming back behind them? It's a technology-driven industry today. It's not something that you can just go out and pick up and start doing. It takes talent and science and technology to manage these operations. >> So, it's interesting. There's been science on kind of the genetic engineering, if you will, genetically modified foods, for a long time. Monsanto is always in the newspaper. But I ask something that's kind of funny, right, 'cuz we've been genetically modifying our food for a long time. Again, drive up and down I-5 and you'll see the funny-looking walnut trees that clearly didn't grow that way, with a solid base on the bottom and a high-yield top. So, talk about attitudes about this. People want it all. They want organic, but they also want it to look beautiful and perfect, be priced right, and delivered from a local farmer. There's no simple solution to these problems. There's a lot of trade-offs that people have to make based on value, so I wonder if you could talk about how that's evolving, Wendy, from your point of view. >> Well, certainly as we think about the products we produce in Iowa, we know that producers are willing to produce whatever the consumer would like. But they really want to be assured they have a market, so, right now in Iowa, we have cage-free eggs being produced, and those are being produced because there's a contract with a buyer, and so I think producers are willing to adapt and address different opportunities in the big markets, different segments of that market, if they can see that profit opportunity that will allow them to continue in their business. >> From the producer's point of view, the subtheme of this show is Fork to Farm, as opposed to Farm to Fork, which you'd think is the logical way, but it's come up and it's been discussed here quite a bit. It's the consumer, again, like they're doing in every business, demanding what they want, what they're willing to pay, and they're very specific in what they want. Was this like a sudden wave that hit, from the producer point of view, or is this an opportunity? Is this a challenge? How is that kind of shifting market dynamics impacting the producers? >> Well, I think it's all being driven by technology.
We're talkin' this morning, years ago it was the expert, you know, the Wendys of the world, they had all the knowledge, and then you had all the consumers listening to 'em and trusting 'em. Today, you have, as I call it, the mama tribe, or the soccer tribe, or that sort of thing, where they're listening to other parents, other mothers in that group, they're listening to the blogs, they're listening to their friends. That's driving the conversation, and there's less science and technology behind it. They don't trust, and the transparency thing comes up constantly. Technology has allowed this just wide-open space where now they've got so much information, how do they process that? What's real, what's not real, in terms of biotech, or is it this, or is it that? Is it wholesome, you know, all these factors. >> It's funny 'cuz you brought up the transparency earlier today as well, so people know what they're getting, they want to know, they really care. They just don't want to get whatever generic ABC, like they used to. >> Right, and I think, again, there's a certain segment of the market that is very interested in that, and companies are responding. I give the example of Nestlé, and so, you get on their web page and you can see the ability to scan the code on a particular product and go and get a lot of information about that product back on the web page of that company. I think that for certain groups of consumers that's going to become even more important, and we have to be prepared to meet that demand. >> So, in terms of what's going on at your academic institutions, how is the environment changing because of technology? We've got these huge macro trends happening, right, cloud is a big thing, Edge Computing, which is obviously important, got to get the cloud to the edge (laughs) of the farm, sensors, big data, being able to collect all this data. I think somebody earlier said it went from no data to now a flood of data, how are you managing that? Better analytics and then, of course, there's fun stuff like drones and some of these other things that can now be applied. How's that workin' its way into what you're doing in terms of training the next generation of entrepreneurs as well as the kind of traditional farmers in this space? >> Well, I think, first of all, we're seeing a lot more integration between what we do in engineering, and what we do in computer science, and what we do in agriculture and business. The overlap and the connection across those disciplines is occurring not just with our faculty but also with our students. We had a group of students at Iowa State who, before they graduated from the college, were able to start a company called ScoutPro that was based on using technology to help farmers identify pests in the field, and that became a company using the technology to do that. Of course, that relied on software development, as well as a clear understanding of agronomic and pest management strategy. I think those integrated approaches are occurring more and more. >> I think at Cal Poly, our motto for over a hundred years has been Learn by Doing, hands-on learning. That's key to us, so you have a lecture class, you have a lab that goes along with it, so they're forced to. We have over 45 to 50 classes, enterprise classes, where you can come in and you can raise, let's say, marigolds, and then you can run that whole value chain and sell it.
You can raise broiler chicks every quarter; for 35 days you can raise 'em up, 7,000 birds, and there's teams of students in these classes, they can do it, then they manage the whole process. A winery, for example, it's a bonded winery. They do the whole process. They know how to change the pumps and all that, so it's hands-on, but you take that from there up to where those students go out into the industry. Our university just signed an agreement with Amazon for the cloud, so we're moving the whole complex, our IT, to the cloud through that organization. Is that right or wrong, I don't know, but we've got to do things faster, quicker, and just our infrastructure, it would've cost us millions to do that ourselves. But that allowed the students-- what is it, Apple is only, the iPhone is 10 years old tomorrow. Tomorrow. These kids, that's all they grew up with. So, we're constantly having to change our faculty, our leadership teams, constantly have to change to keep up and stay side-by-side with the technology, so it's changed our Center for Innovation and Entrepreneurship. Cal Poly has a partnership with the community, with the university; it started in the College of Business, and we have a whole floor of a building in downtown San Luis Obispo, and across the street we've got 60 apartments for students that are involved in these start-ups to live there, so they can walk across the street, get right engaged. So, we're trying to do everything we can, every university is trying to do everything they can to kind of keep this space flowing, and this enthusiasm with these young people. That's where the change is going to occur. >> Right, right. Exciting times. >> It is exciting. >> It is. >> Alright, well, unfortunately, we are out of time. So, we're going to have to leave it there, but I really want to thank you for stopping by, and wish you both safe travels home. >> Thank you very much. >> Thank you. >> Dr. Thulin, Dr. Wintersteen, I'm Jeff Frick. You're watching the Cube. It's Food IT in Mountain View, California. Thanks for watching. We'll be right back after this short break. (electronic music)

Published Date : Jun 28 2017
