Breaking Analysis: We Have the Data…What Private Tech Companies Don’t Tell you About Their Business


 

>> From The Cube Studios in Palo Alto and Boston, bringing you data driven insights from The Cube and ETR. This is "Breaking Analysis" with Dave Vellante.

>> The reverse momentum in tech stocks caused by rising interest rates, less attractive discounted cash flow models, and more tepid forward guidance can be easily measured by public market valuations. And while there's lots of discussion about the impact on private companies and cash runway and 409A valuations, measuring the performance of non-public companies isn't as easy. IPOs have dried up, and public statements by private companies, of course, accentuate the good and kind of hide the bad. Real data, unless you're an insider, is hard to find. Hello and welcome to this week's "Wikibon Cube Insights" powered by ETR. In this "Breaking Analysis", we unlock some of the secrets that non-public, emerging tech companies may or may not be sharing. And we do this by introducing you to a capability from ETR that we've not exposed you to over the past couple of years. It's called the Emerging Technologies Survey, and it is packed with sentiment data and performance data based on surveys of more than a thousand CIOs and IT buyers covering more than 400 companies. And we've invited back our colleague, Erik Bradley of ETR, to help explain the survey and the data that we're going to cover today. Erik, this survey is something that I've not personally spent much time on, but I'm blown away at the data. It's really unique and detailed. First of all, welcome. Good to see you again.

>> Great to see you too, Dave, and I'm really happy to be talking about the ETS, or the Emerging Technology Survey. Even our own clients and constituents probably don't spend as much time in here as they should.

>> Yeah, because there's so much in the mainstream, but let's pull up a slide to bring out the survey composition. Tell us about the study. How often do you run it? What's the background and the methodology?

>> Yeah, you were just spot on the way you were talking about the private tech companies out there. So what we did is we decided to take all the vendors that we track that are not yet public and move 'em over to the ETS. And there isn't a lot of information out there. If you're not in Silicon (indistinct), you're not going to get this stuff. So PitchBook and TechCrunch are two out there that give some data on these guys. But what we really wanted to do was go out to our community. We have 6,000 ITDMs in our community. We wanted to ask them, "Are you aware of these companies? And if so, are you allocating any resources to them? Are you planning to evaluate them," and really just kind of figure out what we can do. So this particular survey, as you can see, 1,000 plus responses, over 450 vendors that we track. And essentially what we're trying to do here is talk about your evaluation and awareness of these companies and also your utilization. And also, if you're not utilizing 'em, then we can also figure out your sales conversion or churn. So this is interesting, not only for the ITDMs themselves to figure out what their peers are evaluating and what they should put in POCs against the big guys when contracts come up, but it's also really interesting for the tech vendors themselves to see how they're performing.

>> And you can see 2/3 of the respondents are director level or above. You've got 28% C-suite. There is of course a North America bias, 70, 75% is North America. But these smaller companies, you know, that's when they start doing business. So, okay.
We're going to do a couple of things here today. First, we're going to give you the big picture across the sectors that ETR covers within the ETS survey. And then we're going to look at the high and low sentiment for the larger private companies. And then we're going to do the same for the smaller private companies, the ones that don't have as much mindshare. And then I'm going to put those two groups together and we're going to look at two dimensions, actually three dimensions: first, which companies are being evaluated the most; second, which companies are getting the most usage and adoption of their offerings; and then third, which companies are seeing the highest churn rates, which of course is a silent killer of companies. And then finally, we're going to look at the sentiment and mindshare for two key areas that we like to cover often here on "Breaking Analysis", security and data. And data comprises database, including data warehousing, and then big data analytics is the second part of data. And then machine learning and AI is the third section within data that we're going to look at. Now, one other thing before we get into it. ETR very often will include open source offerings in the mix, even though they're not companies, like TensorFlow or Kubernetes, for example. And we'll call that out during this discussion. The reason this is done is for context, because everyone is using open source. It is the heart of innovation, and many business models are super glued to an open source offering. Take MariaDB, for example. There's the foundation with the open source code, and then there's, of course, the company that sells services around the offering. Okay, so let's first look at the highest and lowest sentiment among these private firms, the ones that have the highest mindshare. So they're naturally going to be somewhat larger. And we do this on two dimensions, sentiment on the vertical axis and mindshare on the horizontal axis, and note the open source tools: see Kubernetes, Postgres, Kafka, TensorFlow, Jenkins, Grafana, et cetera. So Erik, please explain what we're looking at here, how it's derived and what the data tells us.

>> Certainly, so there is a lot here, so we're going to break it down, first of all, by explaining just what mindshare and net sentiment are. You explained the axes. We have so many evaluation metrics, but we need to aggregate them into one so that we can rank against each other. Net sentiment is really the aggregation of all the positive and subtracting out the negative. So the net sentiment is a very quick way of looking at where these companies stand versus their peers in their sectors and sub sectors. Mindshare is basically the awareness of them, which is good for very early stage companies. And you'll see some names on here that have obviously been around for a very long time, and they're clearly the bigger ones, further out on the axis. Kubernetes, for instance, as you mentioned, is open source. It's the de facto standard for all container orchestration, and it should be that far up into the right, because that's what everyone's using. In fact, the open source leaders are so prevalent in the Emerging Technology Survey that we break them out later in our analysis, 'cause it's really not fair to include them and compare them to the actual companies that are providing the support and the security around that open source technology. But no survey, no analysis, no research would be complete without including these open source technologies.
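To make the net sentiment and mindshare definitions above concrete, here is a minimal sketch of how such an aggregation might be computed from raw survey responses. The status labels, field names and scoring are illustrative assumptions, not ETR's actual methodology.

```python
# Minimal sketch: net sentiment as positive citations minus negative citations,
# normalized by the vendor's total citations; mindshare as the share of the
# overall survey base that is aware of the vendor. Labels are assumptions.
from collections import Counter

POSITIVE = {"adopting", "increasing_spend", "plan_to_evaluate"}
NEGATIVE = {"replacing", "decreasing_spend", "evaluated_no_plan"}

def net_sentiment(citations):
    """citations: list of per-respondent status strings for one vendor."""
    counts = Counter(citations)
    positive = sum(counts[s] for s in POSITIVE)
    negative = sum(counts[s] for s in NEGATIVE)
    return (positive - negative) / max(len(citations), 1)

def mindshare(citations, survey_base):
    """Share of all survey respondents who cite (are aware of) the vendor."""
    return len(citations) / survey_base

# Example: 120 of 1,000 respondents cite a vendor with the statuses below.
statuses = (["adopting"] * 40 + ["plan_to_evaluate"] * 50
            + ["replacing"] * 10 + ["decreasing_spend"] * 20)
print(round(net_sentiment(statuses), 2))    # 0.5
print(round(mindshare(statuses, 1000), 2))  # 0.12
```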
So what we're looking at here, if I can just get away from the open source names, we see other things like Databricks and OneTrust. They're repeating as top net sentiment performers here. And then also the design vendors. People don't spend a lot of time on 'em, but Miro and Figma. This is their third survey in a row where they're just dominating that sentiment overall. And Adobe should probably take note of that, because they're really coming after them. But Databricks, we all know, probably would've been a public company by now if the market hadn't turned, but you can see just how dominant they are in a survey of nothing but private companies. And we'll see that again when we talk about the database side later.

>> And I'll just add, so you see Automation Anywhere on there, the big UiPath competitor, a company that was not able to get to the public markets. They've been trying. Snyk, Peter McKay's company, they've raised a bunch of money, big security player. They're doing some really interesting things in developer security, helping developers secure the data flow. H2O.ai, Dataiku, AI companies. We saw them at the Snowflake Summit. Redis Labs, Netskope in security. So a lot of names that we know that ultimately we think are probably going to be hitting the public market. Okay, here's the same view for private companies with less mindshare. Erik, take us through this one.

>> On the previous slide too, real quickly, I wanted to pull out that SecurityScorecard, and we'll get back into it. But this is a newcomer that I couldn't believe how strong their data was, but we'll bring that up in a second. Now, when we go to the ones of lower mindshare, it's interesting to talk about open source, right? Kubernetes was all the way on the top right. Everyone uses containers. Here we see Istio up there. Not everyone is using service mesh as much, and that's why Istio is in the smaller breakout. But still, when you talk about net sentiment, it's about the leader. It's the highest one there is. So really interesting to point out. Then we see other names like Collibra on the data side really performing well. And again, as always, security is very well represented here. We have Aqua, Wiz, Armis, which is a standout in this survey this time around. They do IoT security. I hadn't even heard of them until I started digging into the data here, and I couldn't believe how well they were doing. And then of course you have AnyScale, which is doing second best in this, and the best name in the survey, Hugging Face, which is a machine learning AI tool. Also doing really well on net sentiment, but they're not as far along on that axis of mindshare just yet. So these are, again, emerging companies that might not be as well represented in the enterprise as they will be in a couple of years.

>> Hugging Face sounds like something you do with your two year old. Like you said, you see high performers. AnyScale does machine learning and you mentioned them. They came out of Berkeley. Collibra, governance. InfluxData is on there. InfluxDB's a time series database. And yeah, of course, Alex, if you bring that back up, you get a big group of red dots, right? That's the bad zone, I guess. Sisense, which does vis, is in there. Yellowbrick Data is an MPP database. How should we interpret the red dots, Erik? I mean, is it necessarily a bad thing? Could it be misinterpreted? What's your take on that?

>> Sure, well, let me just explain the definition of it first from a data science perspective, right? We're a data company first.
So the gray dots that you're seeing that aren't named, that's the mean, that's the average. So in order for you to be on this chart, you have to be at least one standard deviation above or below that average. So that gray is where we're saying, "Hey, this is where the lump of average comes in. This is where everyone normally stands." So you either have to be an outperformer or an underperformer to even show up in this analysis. So by definition, yes, the red dots are bad. You're at least one standard deviation below the average of your peers. It's not where you want to be. And if you're on the lower left, not only are you not performing well from a utilization or an actual usage rate, but people don't even know who you are. So that's a problem, obviously. And the VCs and the PEs out there that are backing these companies, they're the ones who mostly are interested in this data.

>> Yeah. Oh, that's a great explanation. Thank you for that. Nice benchmarking there, and yeah, you don't want to be in the red. All right, let's get into the next segment here. We're going to look at evaluation rates, adoption and the all important churn. First, new evaluations. Let's bring up that slide. And Erik, take us through this.

>> So essentially, I just want to explain what evaluation means: people will cite that they either plan to evaluate the company or they're currently evaluating. So that means we're aware of 'em and we are choosing to do a POC of them. And then we'll see later how that turns into utilization, which is what a company wants to see: awareness, evaluation, and then actually utilizing them. That's sort of the life cycle for these emerging companies. So what we're seeing here, again, are very high evaluation rates. H2O, we mentioned. SecurityScorecard jumped up again. Chargebee, Snyk, Salt Security, Armis. A lot of security names are up here. Aqua, Netskope, which, gosh, has been around forever. I still can't believe it's in an Emerging Technology Survey. But so many of these names fall in data and security again, which is why we decided to pick those out, Dave. And on the lower side, Vena, Acton, those unfortunately took the dubious award of the lowest evaluations in our survey, but I prefer to focus on the positive. So SecurityScorecard, again, a real standout in this one. They're in the security assessment space, basically. They'll come in and assess for you how your security hygiene is. And it's an area of real interest right now amongst our ITDM community.

>> Yeah, I mean, I think those, and then Arctic Wolf is up there too. They're doing managed services. You had mentioned Netskope. Yeah, okay. All right, let's look now at adoption. These are the companies whose offerings are being used the most and are above that standard deviation in the green. Take us through this, Erik.

>> Sure, yet again, what we're looking at is, okay, we went from awareness, we went to evaluation. Now it's about utilization, which means a survey respondent's going to state, "Yes, we evaluated and we plan to utilize it," or "It's already in our enterprise and we're actually allocating further resources to it." Not surprising, again, a lot of open source. The reason why? It's free. So it's really easy to grow your utilization on something that's free. But as you and I both know, as Red Hat proved, there's a lot of money to be made once the open source is adopted, right? You need the governance, you need the security, you need the support wrapped around it.
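Two mechanics were just walked through: the one-standard-deviation rule that decides whether a vendor shows up as a green outperformer, a red underperformer, or stays in the gray "lump of average," and the awareness-to-evaluation-to-utilization life cycle, with churn as the drop-off. A minimal sketch of both follows; the numbers and field names are illustrative assumptions, not actual ETS figures.

```python
# Hypothetical sketch, not ETR's actual code: outlier coloring and funnel rates.
from statistics import mean, stdev

def color_by_std_dev(scores):
    """scores: dict of vendor -> net sentiment. Vendors within one standard
    deviation of the mean stay gray; outperformers are green, laggards red."""
    mu, sigma = mean(scores.values()), stdev(scores.values())
    return {
        vendor: ("green" if s > mu + sigma else "red" if s < mu - sigma else "gray")
        for vendor, s in scores.items()
    }

def funnel_rates(aware, evaluating, utilizing, churned):
    """Awareness -> evaluation -> utilization conversion, plus churn as the
    share of prior users who report moving away from the vendor."""
    return {
        "evaluation_rate": evaluating / aware if aware else 0.0,
        "adoption_rate": utilizing / evaluating if evaluating else 0.0,
        "churn_rate": churned / (utilizing + churned) if (utilizing + churned) else 0.0,
    }

# Illustrative example: decent awareness, weak conversion, roughly 17% churn.
print(funnel_rates(aware=300, evaluating=90, utilizing=20, churned=4))
```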
So here we're seeing Kubernetes, Postgres, Apache Kafka, Jenkins, Grafana. These are all open source based names. But if we're looking at names that are non open source, we're going to see Databricks, Automation Anywhere, Rubrik all have the highest mindshare. So these are the names, not surprisingly, all names that probably should have been public by now. Everyone's expecting an IPO imminently. These are the names that have the highest mindshare. If we talk about the highest utilization rates, again, Miro and Figma pop up, and I know they're not household names, but they are just dominant in this survey. These are applications that are meant for design software, and again, they're going after an Autodesk or a CAD or Adobe type of thing. It is just dominant how high the utilization rates are here, which again is something Adobe should be paying attention to. And then you'll see a little bit lower, but also interesting, we see Collibra again, we see Hugging Face again. And these are names that are obviously on the data governance, ML, AI side. So we're seeing a ton of data, a ton of security, and Rubrik was interesting in this one too, high utilization and high mindshare. We know how pervasive they are in the enterprise already.

>> Erik, Alex, keep that up for a second, if you would. So yeah, you mentioned Rubrik. Cohesity's not on there. They're sort of the big one. We're going to talk about them in a moment. Puppet is interesting to me because you remember the early days of that sort of space, you had Puppet and Chef, and then you had Ansible. Red Hat bought Ansible and then Ansible really took off. So it's interesting to see Puppet on there as well. Okay. So now let's look at the churn, because this one is where you don't want to be. It's, of course, all red 'cause churn is bad. Take us through this, Erik.

>> Yeah, definitely don't want to be here, and I don't love to dwell on the negative, so we won't spend as much time. But to your point, there's one thing I want to point out that I think is important. So you see Rubrik in the same spot, but Rubrik has so many citations in our survey that it actually would make sense that they're showing both high utilization and churn, just because they're so well represented. They have such a high overall representation in our survey. And the reason I call that out is Cohesity. Cohesity has an extremely high churn rate here, about 17%, and unlike Rubrik, they were not on the utilization side. So Rubrik is seeing both, Cohesity is not. It's not being utilized, but it's seeing a high churn. So that's the way you can look at this data and say, "Hm." Same thing with Puppet. You noticed that it was on the other slide. It's also on this one. So basically what it means is a lot of people are giving Puppet a shot, but it's starting to churn, which means it's not as sticky as we would like. One that was surprising on here for me was Tanium. It's kind of jumbled in there. It's hard to see in the middle, but Tanium, I was very surprised to see as high of a churn, because what I do hear from our end user community is that people that use it like it. It really kind of spreads into not only vulnerability management, but also that endpoint detection and response side. So I was surprised by that one, mostly, to see Tanium in here. Mural, again, was another one of those application design softwares that's seeing a very high churn as well.

>> So you're saying if you're in both... Alex, bring that back up if you would.
So if you're in both, like MariaDB is, for example, I think, yeah, they're in both. They're both green in the previous one and red here. That's not as bad. You mentioned Rubrik is going to be in both. Cohesity is a bit of a concern. Cohesity just brought on Sanjay Poonen. So this could be a go to market issue, right? I mean, 'cause Cohesity has got a great product and they've got really happy customers. So they're just maybe having to figure out, okay, what's the right ideal customer profile, and Sanjay Poonen, I guarantee, is going to have that company cranking. I mean, they had been doing very well in the surveys and had fallen off a bit. The other interesting thing, wandering around the previous survey, I saw Cvent, which is an event platform. The only reason I pay attention to that is 'cause we actually have an event platform. We don't sell it separately. We bundle it as part of our offerings. And you see Hopin on here. Hopin raised a billion dollars during the pandemic, and we were like, "Wow, that's going to blow up." And so you see Hopin on the churn and you didn't see 'em in the previous chart, but that's sort of interesting. Like you said, let's not kind of dwell on the negative, but you really don't... You know, churn is a real big concern. Okay, now we're going to drill down into two sectors, security and data, where data comprises three areas: database and data warehousing, machine learning and AI, and big data analytics. So first let's take a look at the security sector. Now this is interesting because not only is it a sector drill down, but it also gives an indicator of how much money the firm has raised, which is the size of that bubble, and tells us if a company is punching above its weight and efficiently using its venture capital. Erik, take us through this slide. Explain the dots, the size of the dots. Set this up, please.

>> Yeah. So again, the axis is still the same, net sentiment and mindshare, but what we've done this time is we've taken publicly available information on how much capital a company has raised, and that'll be the size of the circle you see around the name. And then whether it's green or red is basically saying, relative to the amount of money they've raised, how are they doing in our data? So when you see a Netskope, which has been around forever, raised a lot of money, that's why you're going to see them leaning more towards red, 'cause it's just been around forever and you kind of would expect it. Versus a name like SecurityScorecard, which has only raised a little bit of money and is actually performing just as well, if not better, than a name like a Netskope. OneTrust doing absolutely incredible right now. BeyondTrust. We've seen the issues with Okta, right? So those are two names that play in that space that obviously are probably getting some looks about what's going on right now. Wiz, we've all heard about, right? So raised a ton of money. It's doing well on net sentiment, but the mindshare isn't as high as you'd want, which is why you're going to see a little bit of that red, versus a name like Aqua, which is doing container and application security and hasn't raised as much money, but is really neck and neck with a name like Wiz. So that is why, on a relative basis, you'll see that more green. As we all know, information security is never going away. But as we'll get to later in the program, Dave, I'm not sure, in this current market environment, if people are as willing to do POCs and switch away from their security provider, right?
There's a little bit of tepidness out there, a little trepidation. So right now we're seeing overall a slight pause, a slight cooling in overall evaluations on the security side versus historical levels a year ago.

>> Now let's stay on here for a second. So a couple things I want to point out. So it's interesting. Now Snyk has raised over, I think, $800 million, but you can see them, they're high on the vertical and the horizontal. But now compare that to Lacework. It's hard to see, but they're kind of buried in the middle there. That's the biggest dot in this whole thing. I think I'm interpreting this correctly. They've raised over a billion dollars. It's a Mike Speiser company. He was the founding investor in Snowflake. So people watch that very closely, but that's an example of where they're not punching above their weight. They recently had a layoff and they've got to fine tune things, but I'm still confident they're going to do well, 'cause they're approaching security as a data problem, and people are probably having trouble getting their arms around that. And then again, I see Arctic Wolf. They're not red, they're not green, but they've raised a fair amount of money, and they're showing up to the right and at a decent level there. And a couple of the other ones that you mentioned, Netskope. Yeah, they've raised a lot of money, but they're actually performing where you want. What you don't want is where Lacework is, right? They've got some work to do to really take advantage of the money that they raised last November and prior to that.

>> Yeah, if you're seeing that more neutral color, like you're calling out with an Arctic Wolf, that means relative to their peers, this is where they should be. It's when you're seeing that red on a Lacework where we all know, wow, you raised a ton of money and your mindshare isn't where it should be, your net sentiment is not where it should be comparatively. And then you see these great standouts, like Salt Security and SecurityScorecard and Abnormal. You know they haven't raised that much money yet, but their net sentiment's higher and their mindshare's doing well. So basically, in a nutshell, if you're a PE or a VC and you see a small green circle, then you're doing well, then it means you made a good investment.

>> Some of these guys, I don't know, but you see these small green circles. Those are the ones you want to start digging into and maybe help them catch a wave. Okay, let's get into the data discussion. And again, three areas: database slash data warehousing, big data analytics and ML/AI. First, we're going to look at the database sector. So Alex, thank you for bringing that up. Alright, take us through this, Erik. Actually, let me just say, PostgreSQL, I've got to ask you about this. It shows some funding, but that actually could be a mix of EDB, the company that commercializes Postgres, and Postgres the open source database, which is a transaction system and kind of an open source Oracle. You see MariaDB is a database, an open source database, but the company, they've raised over $200 million and they filed an S-4. So Erik, it looks like this might be a little bit of a mashup of companies and open source products. Help us understand this.

>> Yeah, it's tough when you start dealing with the open source side, and I'll be honest with you, there is a little bit of a mashup here. There are certain names here that are a hundred percent for profit companies.
And then there are others that are obviously open source based. Like, Redis is open source, but Redis Labs is the one trying to monetize the support around it. So you're a hundred percent accurate on this slide. I think one of the things here that's important to note, though, is just how important open source is to data. If you're going to be going into any of these areas, it's going to be open source based to begin with. And Neo4j is one I want to call out here. It's not one everyone's familiar with, but it's basically a graph database, which is a name that we're seeing on the net sentiment side actually really, really high. When you think about it, it's the third overall net sentiment for a niche database play. It's not as big on the mindshare 'cause its use cases aren't as common, but third biggest play on net sentiment. I found that really interesting on this slide.

>> And again, so MariaDB, as I said, they filed an S-4, I think $50 million in revenue, that might even be ARR. So they're not huge, but they're getting there. And by the way, MariaDB, if you don't know, was the company that was formed the day that Oracle bought Sun, in which they got MySQL, and MariaDB has done a really good job of replacing a lot of MySQL instances. Oracle has responded with MySQL HeatWave, which was kind of the Oracle version of MySQL. So there's some interesting battles going on there. If you think about the LAMP stack, the M in the LAMP stack was MySQL, and so now it's all MariaDB replacing that MySQL for a large part. And then you see, again, the red. You know, you've got to have some concerns there. Aerospike's been around for a long time. SingleStore changed their name a couple years ago, last year. Yellowbrick Data, Firebolt was kind of going after Snowflake for a while, but yeah, you want to get out of that red zone. So they've got some work to do.

>> And Dave, real quick, for the people that aren't aware, I just want to let them know that we can cut this data with the public company data as well. So we can cross over this with that, because some of these names are competing with the larger public company names as well. So we can go ahead and cross reference, like, a MariaDB with a Mongo, for instance, or something of that nature. So it's not in this slide, but at another point we can certainly explain on a relative basis how these private names are doing compared to the other ones as well.

>> All right, let's take a quick look at analytics. Alex, bring that up if you would. Go ahead, Erik.

>> Yeah, I mean, essentially here, I can't see it on my screen, my apologies. I just kind of went blank on that. So gimme one second to catch up.

>> So I could set it up while you're doing that. You got Grafana up and to the right. I mean, this is huge, right?

>> Got it, thank you. I lost my screen there for a second. Yep. Again, open source name Grafana, absolutely up and to the right. But as we know, Grafana Labs is actually picking up a lot of speed based on Grafana, of course, and I think we might actually hear some noise from them coming this year. The names that are actually a little bit more disappointing that I want to call out are names like ThoughtSpot. It's been around forever. Their mindshare of course is second best here, but based on the amount of time they've been around and the amount of money they've raised, it's not actually outperforming the way it should be. We're seeing Moogsoft obviously make some waves. That's very high net sentiment for that company.
It's, you know, what, third, fourth position overall in this entire area. Another name like Fivetran, Matillion is doing well. Fivetran, even though it's got a high net sentiment, again, it's raised so much money that we would've expected a little bit more at this point. I know you know this space extremely well, but basically what we're looking at here, and to the bottom left, you're going to see some names with a lot of red, large circles that really just aren't performing that well. InfluxData, however, second highest net sentiment. And it's really pretty early on in this stage, and the feedback we're getting on this name is the use cases are great, the efficacy's great. And I think it's one to watch out for.

>> InfluxData, time series database. The other interesting thing I just noticed here, you've got Tamr on here, which is that little small green. Those are the ones we were saying before, look for those guys. They might be some of the interesting companies out there. And then Observe, Jeremy Burton's company. They do observability on top of Snowflake. Not green, but kind of in that gray. So that's kind of cool. Monte Carlo is another one. They're sort of slightly green. They are doing some really interesting things in data and data mesh. So yeah, okay. So I could spend all day on this stuff, Erik, phenomenal data. I've got to get back and really dig in. Let's end with machine learning and AI. Now this chart is similar in its dimensions, of course, except for the money raised. We're not showing that size of the bubble, but AI is so hot, we wanted to cover that here. Erik, explain this please. Why TensorFlow is highlighted, and walk us through this chart.

>> Yeah, it's funny, yet again, right? Another open source name, TensorFlow, being up there. And I just want to explain, we do break out machine learning, AI as its own sector. A lot of this of course really is intertwined with the data side, but it is its own area. And one of the things I think that's most important here to break out is Databricks. We started to cover Databricks in machine learning, AI. That company has grown into much, much more than that. So I do want to state to you, Dave, and also the audience out there, that moving forward, we're going to be moving Databricks out of only the ML/AI into other sectors, so we can kind of value them against their peers a little bit better. But in this instance, you could just see how dominant they are in this area. And one thing that's not here, but I do want to point out, is that we have the ability to break this down by industry vertical, organization size. And when I break this down into Fortune 500 and Fortune 1000, both Databricks and TensorFlow are even better than you see here. So it's quite interesting to see that the names that are succeeding are also succeeding with the largest organizations in the world. And as we know, large organizations means large budgets. So this is one area that I just thought was really interesting to point out, that as we break down the data by vertical, these two names still are the outstanding players.

>> I just also want to call out H2O.ai. They're getting a lot of buzz in the marketplace and I'm seeing them a lot more. Anaconda, another one. Dataiku consistently popping up. DataRobot is also interesting because of all the kerfuffle that's going on there. The Cube guy, Cube alum, Chris Lynch stepped down as executive chairman.
All this stuff came out about how the executives were taking money off the table and didn't allow the employees to participate in that money raising deal. So that's pissed a lot of people off, and so they're now going through some kind of uncomfortable things, which is unfortunate because DataRobot, I noticed, we haven't covered them that much in "Breaking Analysis", but I've noticed them oftentimes, Erik, in the surveys doing really well. So you would think that company has a lot of potential. But yeah, it's an important space that we're going to continue to watch. Let me ask you, Erik, can you contextualize this from a time series standpoint? I mean, how has this changed over time?

>> Yeah, again, not shown here, but in the data. I'm sorry, go ahead.

>> No, I'm sorry. What I meant, I should have interjected. In other words, you would think in a downturn that these emerging companies would be less interesting to buyers 'cause they're more risky. What have you seen?

>> Yeah, and it was interesting, before we went live, you and I were having this conversation about "Is the downturn stopping people from evaluating these private companies or not," right? In a larger sense, that's really what we're doing here. How are these private companies doing when it comes down to the actual practitioners? The people with the budget, the people with the decision making. And so what I did is, we have historical data, as you know. I went back to the Emerging Technology Survey we did in November of '21, right at the crest, right before the market started to really fall and everything kind of started to fall apart there. And what I noticed is, on the security side very much so, we're seeing fewer evaluations than we were in November '21. So I broke it down. On cloud security, net sentiment went from 21% to 16% from November '21. That's a pretty big drop. And again, that net sentiment is our one aggregate metric for overall positivity, meaning utilization and actual evaluation of the name. Again, in database, we saw it drop a little bit from 19% to 13%. However, in analytics we actually saw it stay steady. So it's pretty interesting that, yes, cloud security and security in general is always going to be important, but right now we're seeing less overall net sentiment in that space. But within analytics, we're seeing steady sentiment with growing mindshare. And also, to your point earlier, in machine learning, AI, we're seeing steady net sentiment, and mindshare has grown a whopping 25% to 30%. So despite the downturn, we're seeing more awareness of these companies in analytics and machine learning and a steady, actual utilization of them. I can't say the same in security and database. They're actually shrinking a little bit since the end of last year.

>> You know, it's interesting. We were on a round table, Erik does these round tables with CISOs and CIOs, and I remember one time you had asked the question, "How do you think about some of these emerging tech companies?" And one of the executives said, "I always include somebody in the bottom left of the Gartner Magic Quadrant in my RFPs." I think he said, "That's how I found," I don't know, it was Zscaler or something like that, years before anybody ever knew of them, "Because they're going to help me get to the next level." So it's interesting to see, Erik, in these sectors, how they're holding up in many cases.

>> Yeah. It's a very important part for the actual IT practitioners themselves. There's always contracts coming up and you always have to worry about your next round of negotiations.
And that's one of the roles these guys play. You have to do a POC when contracts come up, but it's also their job to stay on top of the new technology. You can't fall behind. Like, everyone's a software company now. Everyone's a tech company, no matter what you're doing. So these guys have to stay on top of it. And that's what this ETS can do. You can go in here and look and say, "All right, I'm going to evaluate their technology," and it could be twofold. It might be that you're ready to upgrade your technology and they're actually pushing the envelope, or it simply might be, I'm using them as a negotiation ploy. So when I go back to the big guy who I have full intentions of writing that contract to, at least I have some negotiation leverage.

>> Erik, we've got to leave it there. I could spend all day. I'm going to definitely dig into this on my own time. Thank you for introducing this, really appreciate your time today.

>> I always enjoy it, Dave, and I hope everyone out there has a great holiday weekend. Enjoy the rest of the summer. And, you know, I love to talk data. So anytime you want, just point the camera on me and I'll start talking data.

>> You got it. I also want to thank the team at ETR, not only Erik, but Darren Bramen, who's a data scientist, really helped prepare this data, the entire team over at ETR. I cannot tell you how much additional data there is. We are just scratching the surface in this "Breaking Analysis". So great job, guys. I want to thank Alex Myerson, who's on production and he manages the podcast. Ken Shifman as well, who's just coming back from VMware Explore. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor in chief over at SiliconANGLE. Does some great editing for us. Thank you, all of you guys. Remember, these episodes, they're all available as podcasts. Wherever you listen, all you got to do is just search "Breaking Analysis" podcast. I publish each week on wikibon.com and siliconangle.com. Or you can email me to get in touch, david.vellante@siliconangle.com. You can DM me at @dvellante or comment on my LinkedIn posts, and please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for Erik Bradley and The Cube Insights powered by ETR. Thanks for watching. Be well, and we'll see you next time on "Breaking Analysis". (upbeat music)
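For reference, the November '21 comparison Erik cites reduces to simple period-over-period deltas on the same net sentiment metric. Below is a minimal sketch using the figures quoted in the discussion above; the data structure itself is an assumption for illustration, not ETR's reporting format.

```python
# Hypothetical sketch: period-over-period change in net sentiment by sector,
# using the figures quoted in the discussion (November '21 vs. current survey).
prior = {"cloud_security": 0.21, "database": 0.19}
current = {"cloud_security": 0.16, "database": 0.13}

for sector in prior:
    delta = current[sector] - prior[sector]
    print(f"{sector}: {prior[sector]:.0%} -> {current[sector]:.0%} ({delta:+.0%})")
# cloud_security: 21% -> 16% (-5%)
# database: 19% -> 13% (-6%)
```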

Published Date: Sep 7, 2022



IBM DataOps in Action Panel | IBM DataOps 2020


 

from the cube studios in Palo Alto in Boston connecting with thought leaders all around the world this is a cube conversation hi buddy welcome to this special noob digital event where we're focusing in on data ops data ops in Acton with generous support from friends at IBM let me set up the situation here there's a real problem going on in the industry and that's that people are not getting the most out of their data data is plentiful but insights perhaps aren't what's the reason for that well it's really a pretty complicated situation for a lot of organizations there's data silos there's challenges with skill sets and lack of skills there's tons of tools out there sort of a tools brief the data pipeline is not automated the business lines oftentimes don't feel as though they own the data so that creates some real concerns around data quality and a lot of finger-point quality the opportunity here is to really operationalize the data pipeline and infuse AI into that equation and really attack their cost-cutting and revenue generation opportunities that are there in front of you think about this virtually every application this decade is going to be infused with AI if it's not it's not going to be competitive and so we have organized a panel of great practitioners to really dig in to these issues first I want to introduce Victoria Stassi with who's an industry expert in a top at Northwestern you two'll very great to see you again thanks for coming on excellent nice to see you as well and Caitlin Alfre is the director of AI a vai accelerator and also part of the peak data officers organization at IBM who has actually eaten some of it his own practice what a creep let me say it that way Caitlin great to see you again and Steve Lewis good to see you again see vice president director of management associated a bank and Thompson thanks for coming on thanks Dave make speaker alright guys so you heard my authority with in terms of operationalizing getting the most insight hey data is wonderful insights aren't but getting insight in real time is critical in this decade each of you is a sense as to where you are on that journey or Victoria your taste because you're brand new to Northwestern Mutual but you have a lot of deep expertise in in health care and manufacturing financial services but where you see just the general industry climate and we'll talk about the journeys that you are on both personally and professionally so it's all fair sure I think right now right again just me going is you need to have speech insight right so as I experienced going through many organizations are all facing the same challenges today and a lot of those pounds is hard where do my to live is my data trust meaning has a bank curated has been Clinton's visit qualified has a big a lot of that is ready what we see often happen is businesses right they know their KPIs they know their business metrics but they can't find where that data Linda Barragan asked there's abundant data disparity all over the place but it is replicated because it's not well managed it's a lot of what governance in the platform of pools that governance to speak right offer fact it organizations pay is just that piece of it I can tell you where data is I can tell you what's trusted that when you can quickly access information and bring back answers to business questions that is one answer not many answers leaving the business to question what's the right path right which is the correct answer which which way do I go at the executive level that's the 
biggest challenge where we want the industry to go moving forward right is one breaking that down along that information to be published quickly and to an emailing data virtualization a lot of what you see today is most businesses right it takes time to build out large warehouses at an enterprise level we need to pivot quicker so a lot of what businesses are doing is we're leaning them towards taking advantage of data virtualization allowing them to connect to these data sources right to bring that information back quickly so they don't have to replicate that information across different systems or different applications right and then to be able to provide that those answers back quickly also allowing for seamless access to from the analysts that are running running full speed right try and find the answers as quickly as they find great okay and I want to get into that sort of how news Steve let me go to you one of the things that we talked about earlier was just infusing this this mindset of a data cult and thinking about data as a service so talk a little bit about how you got started what was the starting NICUs through that sure I think the biggest thing for us there is to change that mindset from data being just for reporting or things that have happened in the past to do some insights on us and some data that already existed well we've tried to shift the mentality there is to start to use data and use that into our actual applications so that we're providing those insight in real time through the applications as they're consumed helping with customer experience helping with our personalization and an optimization of our application the way we've started down that path or kind of the journey that we're still on was to get the foundation laid birch so part of that has been making sure we have access to all that data whether it's through virtualization like vic talked about or whether it's through having more of the the data selected in a data like that that where we have all of that foundational data available as opposed to waiting for people to ask for it that's been the biggest culture shift for us is having that availability of data to be ready to be able to provide those insights as opposed to having to make the businesses or the application or asked for that day Oh Kailyn when I first met into pulp andari the idea wobble he paid up there yeah I was asking him okay where does a what's the role of that at CBO and and he mentioned a number of things but two of the things that stood out is you got to understand how data affect the monetization of your company that doesn't mean you know selling the data what role does it play and help cut cost or ink revenue or productivity or no customer service etc the other thing he said was you've got a align with the lines of piss a little sounded good and this is several years ago and IBM took it upon itself Greek its own champagne I was gonna say you know dogfooding whatever but it's not easy just flip a switch and an infuse a I and automate the data pipeline you guys had to go you know some real of pain to get there and you did you were early on you took some arrows and now you're helping your customers better on thin debt but talk about some of the use cases that where you guys have applied this obviously the biggest organization you know one of the biggest in the world the real challenge is they're sure I'm happy today you know we've been on this journey for about four years now so we stood up our first book to get office 2016 and you're 
right it was all about getting what data strategy offered and executed internally and we want to be very transparent because as you've mentioned you know a lot of challenges possible think differently about the value and so as we wrote that data strategy at that time about coming to enterprise and then we quickly of pivoted to see the real opportunity and value of infusing AI across all of our needs were close to your question on a couple of specific use cases I'd say you know we invested that time getting that platform built and implemented and then we were able to take advantage of that one particular example that I've been really excited about I have a practitioner on my team who's a supply chain expert and a couple of years ago he started building out supply chain solution so that we can better mitigate our risk in the event of a natural disaster like the earthquake hurricane anywhere around the world and be cuz we invest at the time and getting the date of pipelines right getting that all of that were created and cleaned and the quality of it we were able to recently in recent weeks add the really critical Kovach 19 data and deliver that out to our employees internally for their preparation purposes make that available to our nonprofit partners and now we're starting to see our first customers take advantage too with the health and well-being of their employees mine so that's you know an example I think where and I'm seeing a lot of you know my clients I work with they invest in the data and AI readiness and then they're able to take advantage of all of that work work very quickly in an agile fashion just spin up those out well I think one of the keys there who Kaelin is that you know we can talk about that in a covet 19 contact but it's that's gonna carry through that that notion of of business resiliency is it's gonna live on you know in this post pivot world isn't it absolutely I think for all of us the importance of investing in the business continuity and resiliency type work so that we know what to do in the event of either natural disaster or something beyond you know it'll be grounded in that and I think it'll only become more important for us to be able to act quickly and so the investment in those platforms and approach that we're taking and you know I see many of us taking will really be grounded in that resiliency so Vic and Steve I want to dig into this a little bit because you know we use this concept of data op we're stealing from DevOps and there are similarities but there are also differences now let's talk about the data pipeline if you think about the data pipeline as a sort of quasi linear process where you're investing data and you might be using you know tools but whether it's Kafka or you know we have a favorite who will you have and then you're transforming that that data and then you got a you know discovery you got to do some some exploration you got to figure out your metadata catalog and then you're trying to analyze that data to get some insights and then you ultimately you want to operationalize it so you know and and you could come up with your own data pipeline but generally that sort of concept is is I think well accepted there's different roles and unlike DevOps where it might be the same developer who's actually implementing security policies picking it the operations in in data ops there might be different roles and fact very often are there's data science there's may be an IT role there's data engineering there's analysts etc so Vic I wonder if you 
could you could talk about the challenges in in managing and automating that data pipeline applying data ops and how practitioners can overcome them yeah I would say a perfect example would be a client that I was just recently working for where we actually took a team and we built up a team using agile methodologies that framework right we're rapidly ingesting data and then proving out data's fit for purpose right so often now we talk a lot about big data and that is really where a lot of industries are going they're trying to add an enrichment to their own data sources so what they're doing is they're purchasing these third-party data sets so in doing so right you make that initial purchase but what many companies are doing today is they have no real way to vet that so they'll purchase the information they aren't going to vet it upfront they're going to bring it into an environment there it's going to take them time to understand if the data is of quality or not and by the time they do typically the sales gone and done and they're not going to ask for anything back but we were able to do it the most recent claim was use an instructure data source right bring that and ingest that with modelers using this agile team right and within two weeks we were able to bring the data in from the third-party vendor what we considered rapid prototyping right be able to profile the data understand if the data is of quality or not and then quickly figure out that you know what the data's not so in doing that we were able to then contact the vendor back tell them you know it sorry the data set up to snuff we'd like our money back we're not gonna go forward with it that's enabling businesses to be smarter with what they're doing with 30 new purchases today as many businesses right now um as much as they want to rely on their own data right they actually want to rely on cross the data from third-party sources and that's really what data Ops is allowing us to do it's allowing us to think at a broader a higher level right what to bring the information what structures can we store them in that they don't necessarily have to be modeled because a modeler is great right but if we have to take time to model all the information before we even know we want to use it that's gonna slow the process now and that's slowing the business down the business is looking for us to speed up all of our processes a lot of what we heard in the past raised that IP tends to slow us down and that's where we're trying to change that perception in the industry is no we're actually here to speed you up we have all the tools and technologies to do so and they're only getting better I would say also on data scientists right that's another piece of the pie for us if we can bring the information in and we can quickly catalog it in a metadata and burn it bring in the information in the backend data data assets right and then supply that information back to scientists gone are the days where scientists are going and asking for connections to all these different data sources waiting days for access requests to be approved just to find out that once they figure out how it with them the relationship diagram right the design looks like in that back-end database how to get to it write the code to get to it and then figure out this is not the information I need that Sally next to me right fold me the wrong information that's where the catalog comes in that's where due to absent data governance having that catalog that metadata management platform 
available to you they can go into a catalog without having to request access to anything quickly and within five minutes they can see the structures what if the tables look like what did the fields look like are these are these the metrics I need to bring back answers to the business that's data apps it's allowing us to speed up all of that information you know taking stuff that took months now down two weeks down two days down two hours so Steve I wonder if you could pick up on that and just help us understand what data means you we talked about earlier in our previous conversation I mentioned it upfront is this notion of you know the demand for for data access is it was through the roof and and you've gone from that to sort of more of a self-service environment where it's not IT owning the data it's really the businesses owning the data but what what is what is all this data op stuff meaning in your world sure I think it's very similar it's it's how do we enable and get access to that clicker showing the right controls showing the right processes and and building that scalability and agility and into all of it so that we're we're doing this at scale it's much more rapidly available we can discover new data separately determine if it's right or or more importantly if it's wrong similar to what what Vic described it's it's how do we enable the business to make those right decisions on whether or not they're going down the right path whether they're not the catalog is a big part of that we've also introduced a lot of frameworks around scale so just the ability to rapidly ingest data and make that available has been a key for us we've also focused on a prototyping environment so that sandbox mentality of how do we rapidly stand those up for users and and still provide some controls but have provide that ability for people to do that that exploration what we're finding is that by providing the platform and and the foundational layers that were we're getting the use cases to sort of evolve and come out of that as opposed to having the use cases prior to then go build things from we're shifting the mentality within the organization to say we don't know what we need yet let's let's start to explore that's kind of that data scientist mentality and culture it more of a way of thinking as opposed to you know an actual project or implement well I think that that cultural aspect is important of course Caitlin you guys are an AI company or at least that you know part of what you do but you know you've you for four decades maybe centuries you've been organized around different things by factoring plant but sales channel or whatever it is but-but-but-but how has the chief data officer organization within IBM been able to transform itself and and really infuse a data culture across the entire company one of the approaches you know we've taken and we talk about sort of the blueprint to drive AI transformation so that we can achieve and deliver these really high value use cases we talked about the data the technology which we've just pressed on with organizational piece of it duration are so important the change management enabling and equipping our data stewards I'll give one a civic example that I've been really excited about when we were building our platform and starting to pull districting structured unstructured pull it in our ADA stewards are spending a lot of time manually tagging and creating business metadata about that data and we identified that that was a real pain point costing us a lot of 
>> Yes, so the cultural piece, the people piece, is key, and we've talked a little bit about the process. I want to get a little bit into the tech. Steve, I wonder if you could tell us, what's the tech? We have this bevy of tools, I mentioned a number of them up front; you've got different data stores, you've got open-source tooling, you've got IBM tooling. What are the critical components of the technology that people should be thinking about? >> From an ingestion perspective, we're trying to do a lot within a Python framework and scalable ingestion pipeline frameworks. On the catalog side, we've gone with IBM Cloud Pak for Data, which provides a platform for a lot of these tools to stay integrated together: everything from the discovery of data sources, the cataloging, the documentation of those data sources, all the way through the actual advanced analytics, the Python and R models, the open-source IDEs, combined with the ability to do some data prep and refinery work. Having that all in an integrated platform was key for us in rolling out more of these tools in bulk, as opposed to having point solutions, so that's been a big focus area for us. And then on the analytics side, web versus IDE, there are a lot of different components you can go into, whether it's MuleSoft, whether it's AWS and some of the native functionality out there. You mentioned Kafka before, and Kinesis and different streaming technologies; those are all in the toolbox that we're starting to look at. One of the keys here is that we're trying to make decisions in as close to real time as possible, as opposed to the business having to wait weeks or months and then, by the time they get insights, it's late and really rearview mirror.
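To show what pushing a decision closer to real time can look like in practice, here is a minimal sketch of a streaming consumer that keeps a running metric as events arrive instead of waiting for a batch report. It assumes a local Kafka broker, a hypothetical `transactions` topic carrying JSON messages, and a made-up anomaly threshold.

```python
import json
from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "transactions",                      # hypothetical topic
    bootstrap_servers="localhost:9092",  # hypothetical broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="latest",
)

count, total = 0, 0.0
for message in consumer:
    event = message.value
    amount = float(event.get("amount", 0.0))
    count += 1
    total += amount

    # Running average is updated on every event; no overnight batch required
    running_avg = total / count
    if count > 100 and amount > 10 * running_avg:
        print(f"possible anomaly: {event}")
```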
>> So Vic, your focus in your career has been a lot on data quality, governance, and master data management. From a data quality standpoint, what are some of the key tools you're familiar with, that you've used, that have really enabled you to operationalize that data pipeline? >> I would say I definitely have the most experience with the IBM tools, but also with Informatica; those are, to me, the two top players. IBM has definitely come to the table with a suite. Like Steve said, Cloud Pak for Data is really a one-stop shop, and that's allowing quick, seamless access for the business user, versus them having to go into some of the previous versions IBM had rolled out, where you're going into different user interfaces to find your information. That can become clunky, it can add to the process, and it can also leave almost a bad taste in most people's mouths, because they don't want to navigate from system to system to system just to get their information. So Cloud Pak, to me, definitely brings everything to the table in a one-stop-shop type of environment. Informatica is working on the same thing, but I would tell you that they haven't come up with a solution that really comes close to what IBM has done with Cloud Pak for Data; I'd be interested to see if they can bring that to the horizon. But really, the IBM suite of tools allows for profiling, the analytics, metadata management, access to Db2 Warehouse on Cloud; those are the tools I've worked with in my past to implement, as well as Cloud Object Storage, to bring all that together and provide that one stop. At Northwestern, we're working right now with Collibra. I think Collibra is a great governance catalog, but that's really what it's truly made for: it's a governance catalog. You have to bring some other pieces to the table in order for it to serve up everything Cloud Pak does today, which is the advanced profiling, the data virtualization that Cloud Pak enables, the machine learning at the level where you can actually work with R and Python code and put your notebooks inside the Pak. Those are some of the pieces that are missing in some of the other vendors' tools today. >> So one of the things you're hearing here is the theme of openness. We've talked about a lot of tools, and not all IBM tools; there are many, but people want to use what they want to use. So Caitlin, from an IBM perspective, what's your commitment to openness, number one, but also, we've talked a lot about Cloud Paks, to simplifying the experience for your clients? >> Well, I thank Steve and Victoria for speaking to their experience; I really appreciate the feedback. Part of our approach has been to really take on the challenges that we've had ourselves. I mentioned some of the capabilities that we brought forward in our Cloud Pak for Data product, one being automating metadata generation, and that was something we had to solve for our own data challenges and needs. So we will continue to source our use cases from, and ground them in, a practitioner perspective of what we're trying to do, solve, and build. And the approach we've really been taking is one of co-creation: we roll these capabilities out in the product, work with customers like Steve and Victoria, really solicit feedback, have our dev teams push that out, and just be very open and transparent. We want to deliver a seamless experience, we want to do it in partnership, and we'll continue to solicit feedback, improve, and roll out. That has been our approach, it will continue to be, and we really appreciate the partnerships that we've been able to foster.
>> So we don't have a ton of time, but I want to go to the practitioners on the panel and ask about key performance indicators. When I think about DevOps, one of the things we measure is the elapsed time to deploy applications, start to finish; we measure the amount of rework that has to be done, the quality of the deliverable. What are the KPIs, Victoria, that are indicators of success in operationalizing the data pipeline? >> Well, I would definitely say your ability to deliver quickly. How fast can you deliver: is that quicker than what you've been able to do in the past? What is the user experience like: have you been able to measure the amount of time users were spending to bring information to the table in the past, and have you been able to reduce that time to delivery of information, of business answers to business questions? Those are the key performance indicators that tell you the suite we've put in place today is providing information quickly: I can get my business answers quicker than I could before, and the information is accurate. So also being able to measure whether what I've given back is quality, or whether it's the wrong information and I've got to go back to the table and find where I need to gather it from somewhere else. That, to me, tells us, okay, with the tools we've put in place today, my teams are working quicker and they're answering the questions they need to, accurately; that's when we know we're on the right path. >> Steve, anything you'd add to that? >> I think she covered a lot of the people components. There's the data quality scoring for all the different data attributes: coming up with a metric around how to measure that, and then showing that trend over time to show that it's getting better. The other one we're tracking is just around overall data availability: how much data are we providing to our users, and showing that trend. When I first started, we had somewhere in the neighborhood of 500 files that had been brought into the warehouse and published, with in the neighborhood of a couple thousand fields available. We've grown that to where we have thousands of tables now available, so it's been hundreds of percent in scale, as far as the availability of that data: how much is out there, how much is ready and available for people to just dig in, put into their analytics and their models, and get those back into other applications. So that's another key metric that we're starting to track as well.
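Here is a minimal sketch of what a per-attribute data quality score and an availability trend could look like as concrete, trendable numbers. The 50/50 weighting of completeness versus validity and the sample table counts are hypothetical; the aim is only to turn the KPIs described above into metrics that can be plotted over time.

```python
import pandas as pd

def attribute_quality(series: pd.Series, validator=None) -> float:
    """Blend completeness and validity into a single 0-100 score (hypothetical 50/50 weighting)."""
    completeness = 1.0 - series.isna().mean()
    non_null = series.dropna()
    validity = validator(non_null).mean() if (validator is not None and len(non_null)) else 1.0
    return round(100 * (0.5 * completeness + 0.5 * validity), 1)

df = pd.DataFrame({
    "customer_id": [1, 2, 3, None, 5],
    "zip_code": ["53202", "60601", "ABCDE", "55401", None],
})

scores = {
    "customer_id": attribute_quality(df["customer_id"]),
    "zip_code": attribute_quality(df["zip_code"], validator=lambda s: s.str.fullmatch(r"\d{5}")),
}
print("quality scores:", scores)

# Availability trend: how many tables are published and ready each month (sample numbers)
availability = pd.Series({"2020-01": 500, "2020-02": 1200, "2020-03": 2600, "2020-04": 4100})
print("month-over-month growth:\n", availability.pct_change().round(2))
```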
>> So, last question. I said at the top that every application is going to need to be infused with AI this decade, otherwise that application is not going to be as competitive as it could be. For those that are maybe stuck in their journey and don't really know where to get started, I'll start with Caitlin, go to Victoria, and then Steve will bring us home. What advice would you give to the people that need to get going on this? >> My advice is to poll the folks that are either producing or accessing your data and figure out where the friction is. I mentioned some of the data management challenges we were seeing: these processes were taking weeks, they were prone to error and highly manual, so that part was ripe for an AI project. By identifying those use cases that are really causing the most rework and manual effort, you can move really quickly, and as you build this platform out you're able to spin those up in an accelerated fashion. So identifying that, and figuring out the business impact you're able to drive very early on, means you can get going and start really seeing the value. >> Great. >> Yeah, I would say Caitlin hit it on the head, but I would add to that: first and foremost, in my opinion, the importance here is data governance. You need to implement data governance at an enterprise level. Many organizations will do it, but they'll have silos of governance. You really need an enterprise data governance platform that consists of a true framework: an operational model, charters, data domain owners, data domain stewards, data custodians; all of that needs to be defined. And while that may take some work in the beginning, the payoff down the line is that much more. It's allowing your business to truly own the data. Once they own the data and take part in classifying the data assets for technologists and for analysts, you can start to eliminate some of the technical debt that most organizations have acquired today. They can start to look at what are some of the systems we can turn off, what are some of the systems where we see value, and truly build out a capability matrix. We can start mapping systems to capabilities and start to ask where we have redundancy and what we can get rid of. That's the first piece of it. The second piece is really leveraging the tools that are out there today, the IBM tools and some of the other tools as well, that enable some of the newer, next-generation capabilities, AI for example, allowing automation. For all of us that means a lot of the analysts in place today can access the information quicker and deliver the information accurately, like we've been talking about, because it's been classified and that pre-work has been done. It's never too late to start, but once you start, it really acts as a domino effect where you start to see everything else fall into place. >> All right, thank you. And Steve, bring us home: advice for your peers that want to get started. >> Sure. I think everything they said is valid and accurate. The thing I would add, from a starting perspective, is: if you haven't started, start. Don't try to overthink it or over-plan it; just do something and start to show that progress and value. The use cases will come, even if you think you're not there yet; it's amazing, once you have the foundational components in place, how some of these things start to come out of the woodwork. So get started, have that iterative approach and an open mindset, and encourage exploration and enablement. Look your organization in the eye and ask: why are there silos, why do things look like this, what are the things getting in our way? Focus on and tackle those areas, as opposed to trying to put up more rails and more boundaries and encouraging that silo mentality; really look at how you focus on enablement. And then the last comment would just be on scale: everything should be focused on scale. What you think is a one-time process today, you're going to do again; we've all been there, you're going to do it a thousand times. So prepare for that, prepare for the fact that you're going to do everything a thousand times, and start to instill that culture within your organization. >> Great advice, guys. Data, bringing machine intelligence and AI to really drive insights, and scaling with a cloud operating model no matter where that data lives. It's really great to have three such knowledgeable practitioners. Caitlin, Victoria, and Steve, thanks so much for coming on theCUBE and helping support this panel.
>> All right, and thank you for watching, everybody. Now remember, this panel was part of the raw material that went into a crowd chat we hosted on May 27th at crowdchat.net/dataops, so go check that out. This is Dave Vellante for theCUBE. Thanks for watching. [Music]

Published Date : May 28 2020

**Summary and Sentiment Analysis are not shown because of an improper transcript**

Steven Lueck, Associated Bank | IBM DataOps in Action


 

>> From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi everybody, welcome back. This is Dave Vellante, and welcome to this special presentation made possible by IBM. We're talking about DataOps, DataOps in action. Steve Lueck is here; he's the Senior Vice President and Director of Data Management at Associated Bank. Steve, great to see you. How are things going in Wisconsin? All safe? >> We're doing well, staying safe, staying healthy. Thanks for having me, Dave. >> You're very welcome. So Associated Bank, a regional bank in the Midwest, covers a lot of territory, not just Wisconsin but a number of other states around there: retail, commercial lending, real estate, a lot of stuff. I think it's the largest bank in Wisconsin. But tell us a little bit about your business and your specific role. >> Sure, yeah, that's a good intro. We're definitely the largest bank headquartered in Wisconsin, and then we have branches in the Upper Midwest area, so Minnesota, Illinois, and Wisconsin are our primary locations. My role at Associated: I'm Director of Data Management. I've been with the bank a couple of years now, and I'm really focused on defining our data strategy overall, everything from data ingestion through consumption of data and analytics all the way through, and then also the data governance components, keeping the controls and the rails in place around all of our data and its usage. >> So financial services is obviously one of the more cutting-edge industries in terms of its use of technology. Not only are you good negotiators, but you often are early adopters. You guys were on the big data bandwagon early; a lot of financial services firms were early on in Hadoop. I wonder if you could tell us a little bit about the business drivers, and where the pressure points are that are informing your digital strategy, your data and DataOps strategy? >> Sure, yeah. I think one of the key areas for us is that we're trying to shift from more of a reactive mode into more of a predictive, prescriptive mode from a data and analytics perspective: using our data to infuse and drive more business decisions, but also to infuse it into actual applications and the customer experience. We have a wealth of data at our fingertips, and we're really focused on starting to build out that data lake style strategy, making sure we're ahead of the curve as far as trying to predict what our end users are going to need, and some of the advanced use cases we're going to have before we even know they actually exist. So it's really trying to prepare us for the future and what's next, and then enabling and empowering the business to be able to pivot when we need to, without having everything perfectly prescribed and ready ahead of time. >> I wonder if we could talk a little bit about the data journey. I know it's kind of a buzzword, but in my career as an independent observer and analyst, I've watched the promise of decision support systems and the enterprise data warehouse: give that 360-degree view of the business, the real-time nature, the customer intimacy, all of that. And up until sort of the recent digital meme, I feel as though the industry hasn't lived up to that promise. So I wonder if you could take us through the journey, tell us where you came from and where you are today; I really want to understand some of the successes you've had. >> Sure, no, that's a great point.
I feel like, as an industry, we're at a point now where the people, process, and technology have all caught up to each other. Real-time streaming analytics, the data-as-a-service mentality, leveraging web services and APIs more throughout our organization and our industry as a whole: I feel like that's really starting to take shape right now, and all the pieces of that puzzle have come together. Where we started, from a journey perspective, was very much your legacy reporting, data warehouse mindset: tell me the data elements you think you're going to need, we'll figure out how to map those in and transform them, we'll figure out how to get those prepared for you, with that whole lifecycle, that waterfall mentality of how we get this through the funnel and out to users. The quality was usually there, the enablement was still there, but it was missing that rapid turnaround. It was also missing the what's next, the things you haven't thought of, almost to the point of discouraging people from asking for too many things, because it got too expensive and too hard to maintain; there was some difficulty in that space. Some of the things we're trying to do now are to build that enablement mentality of encouraging people to ask for everything. So when we bring new systems into the bank, it's no longer an option how much data they're going to send us: we're getting all of the data, we're going to bring it all together for people, and then really start to figure out how that data can be used. We almost have to push that out and infuse it within our organization, as opposed to waiting for it to be asked for. So I think bringing the people, the process, and now the tools and capabilities together has really started to make a move for us and for the industry. >> I mean, it's really not an uncommon story, right? You had a traditional data warehouse system, you had some experts you had to go through to get the data; the business kind of felt like it didn't own the data, it felt like it was imposing every time it made a request, or maybe it was frustrated because it took so long, and then by the time they got the data, perhaps the market had shifted. So it created a lot of frustration, but to your point, it became very useful as a reporting tool, and that was kind of the sweet spot. So how did you overcome that and get to where you are today, and where are you today? >> I was going to say, I think we're still overcoming it; we'll see how this all goes. There are a couple of things we've started to enable. First off is just having that concept of scale and an enablement mentality in everything we do. So when we bring systems on, we bring on everything; we're starting to have those components and pieces in place, and we're starting to build more framework-based, reusable processes and procedures so that every ask is not brand new, not a reinvent-the-wheel and re-solve all that work. I think that's helped expedite our time to market and really get some of the buy-in and support from around the organization. And it's really just finding the right use cases and the right business partners to work with and partner with, so that you help them through their journey as well; they're on a similar roadmap and journey for their own lifecycles, in their product development or whatever business line they're in.
>> So from a process standpoint, you kind of have to jettison the waterfall, as you mentioned before, and move to more of an agile approach. Did it require different skill sets? Talk about the process and the people side of it. >> Yeah, it's been a shift. We've tried to shift more towards, I wouldn't call us formally agile, I would say we're a little bit more lean, an iterative, backlog type of approach. Putting that work together in queues, having the queue be reprioritized, and working with the business owners through those things has been a key success criterion for us in how we manage that work, as opposed to opening formal project requests and having all that work funnel through some of the old channels that, like you mentioned earlier, detracted a little bit from the way things had been done in the past and added layers that people felt wouldn't be necessary for what looked like a small ask in their eyes. I think that also led to some of the data silos and components of data in different locations that we have in place today in the industry, and I don't think our company is alone in that. But those silos are there for a reason: they were filling a need that was missing, a gap in the solution. So what we're trying to do is really take that to heart and evaluate what we can do to enable those mindsets, and find out what the gap was and why they had to go get a siloed solution or work around operations and technology and the channels that had been in place. >> What would you say were your biggest challenges in getting from point A to point B, point B being where you are today? >> There were challenges on each of the pillars: people, process, and technology. People are hard to change; behavioral change has been difficult. Same with the process side: shifting into that backlog-style mentality, working with the users, and having more of that be sort of maintenance-type support work is a different culture for our organization than traditional project management. And then the toolsets: we had to look at and evaluate what tools we need to enable this behavior and this mentality, how we enable more self-service and exploration, and how we get people the data they need when they need it and empower them to use it. >> So maybe you could share with us some of the outcomes. I know we're never done in this business, but thinking about the investments you've made in tech, people, and process, and the time it takes to get leadership involved, what has been, so far anyway, the business outcome? Can you share any metrics, or is it more subjective guidance? >> I think, from a subjective perspective, some of the biggest things for us have been our ability to truly start to have that 360-degree view of the customer, which we're probably never going to fully get; everyone's striving for that. But the ability to have all of that data available at our fingertips, have it all consolidated into one location, one platform, and start to be the hub that redistributes that data out to our applications and infuses it outward has been a key component for us.
I think some of the other big components, and differentiators and value we can show from an organizational perspective: we're in an M&A mode, so we're always looking at things from a merger and acquisition perspective, and the model we've built out from a data strategy perspective has proven itself useful over and over in that M&A mentality of how you rapidly ingest new data sets, get them understood, and get them distributed to the right consumers. It fit our model exactly; it hasn't been an exception, it's been just part of our overall framework for how we get that data, and it wasn't anything new we had to do differently because it was M&A; the timelines were just a little more expedited. The other thing that's been interesting, in the world we're in now from a COVID perspective, is having to pivot and change some of the ways we do business, with the PPP loans; our business models sort of had to change overnight, and our ability to work with our different lines of business and get them the data they need to help drive those decisions was another scenario where, had we not had the foundational components there in the platform, we would have spun a lot longer. >> So your DataOps approach, I'm going to use that term, helped you in this COVID situation. I mean, you had the PPP, you had a slew of businesses looking to get access to that money, you had uncertainty with regard to what the rules of the game were, and it was really kind of opaque in terms of what you, as the bank, had to do. The volume of loans had to go through the roof, and the time frame was such that it was within days or weeks that you had to provide these. So I wonder if you could talk about that a little bit, and how your approach to data helped you be prepared for it. >> Yeah, it was a race. The bottom line is that it felt like a race from an industry perspective: how could we get this out there soon enough, fast enough, and provide the most value to our customers? Our application teams did a phenomenal job of enabling the applications to help streamline some of the application process for the loans themselves, but from a data and reporting perspective, behind the scenes, we were there, and we had the tools, capabilities, and readiness to say: we have the data now in our lake, and we can start to make some business-driven decisions around all the different components of what's being processed on a daily basis from an application perspective versus what's been funded, and how those funnel all the way through, doing data quality checks and operational reporting checks to make sure that data moved properly and got booked in the proper ways, because of the rapid nature of how it was all being done. There were other COVID-type use cases as well; we had some different scenarios around various feed reporting and other capabilities that the business wasn't necessarily prepared for. We wouldn't have planned to have some of these types of reporting in place, but we were able to deliver them because we had access to all the data, and because of these frameworks we had put into place we could pretty rapidly turn around some of those data points and analytics to make better decisions.
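As a rough sketch of the kind of daily processed-versus-funded reconciliation and quality check described here, the example below compares two hypothetical extracts, applications processed and loans funded, and flags mismatches. The file names, columns, and rules are invented for illustration.

```python
import pandas as pd

# Hypothetical daily extracts landed in the data lake
processed = pd.read_csv("ppp_applications_processed.csv")  # application_id, approved_amount
funded = pd.read_csv("ppp_loans_funded.csv")               # application_id, funded_amount

merged = processed.merge(funded, on="application_id", how="outer", indicator=True)

report = {
    # Applications processed but with no funding record yet
    "processed_not_funded": int((merged["_merge"] == "left_only").sum()),
    # Fundings with no matching application -- should never happen
    "funded_without_application": int((merged["_merge"] == "right_only").sum()),
    # Booked amount should equal the approved amount on every matched loan
    "amount_mismatches": int(((merged["_merge"] == "both") &
                              (merged["approved_amount"] != merged["funded_amount"])).sum()),
}

print(report)  # anything unexpected gets investigated before the daily report goes out
```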
>> So given the propensity and the pace of M&A, there has to be a challenge fundamentally in terms of data quality, consistency, and governance. Give us the before and after: before being before the DataOps mindset, and after being where you are today. >> I think that's still a journey; we're always trying to get better at that as well. But the DataOps mindset, for us, has really shifted us to start to think about automation: pipelines, that enablement, constant improvement, and how we deploy faster, deploy more consistently, and have the right capabilities in place when we need them. Where some of that has come into play from an M&A perspective is really around building scale into everything we do. The real-time nature, the scalability, and the rapid deployment models we have in place are where that starts to join forces and really become powerful: having the ability to rapidly ingest new data sources, whether we know about them in advance or not, then exposing that data and having the tools and platforms to expose it to our users and enable our business lines, whether it's COVID, whether it's M&A. The use cases keep coming up, and we keep running into the same concept, which is: how do we rapidly get people the data they need when they need it, but still provide the rails and controls and make sure it's governed along the way? [Music] >> Let's talk about the tech, though; I wonder if we could spend some time on that. Can you paint a picture for us of what we're looking at here? You've got some traditional EDWs involved, I'm sure you've got lots of data sources, you may be one of the zookeepers from the Hadoop days with a lot of experimentation, and there may be some machine intelligence. Paint a picture for us. >> Sure. We're evolving some of the toolsets and capabilities as well. We have some generic, custom, in-house-built ingestion frameworks that we've built out for how to rapidly ingest and script out how we bring those data sources into play. What we've now started as well is a journey down the IBM Cloud Pak product, which is providing us the ability to govern and control all of our data sources and then start to enable some of that real-time, ad hoc analytics and data preparation and data shaping. Some of the components we're doing in there are around data discovery: pointing at data sources, rapidly running data profiles, and exposing that data to our users, which is obviously very handy in the M&A space and any time you get new data sources in. Then there's the concept of publishing that, and leveraging some of the AI capabilities for assigning business terms in the data glossary; that's another key component for us. On the consumption side of the house, we're a Cognos shop, and we do Tableau from a data visualization perspective as well, and that's where Cloud Pak is now starting to come into play too, from a data refinement perspective, giving users the ability to go shape and prep their own data sets, all within that governed concept. And we've actually now started down the enablement path from an AI perspective with Python and R, using Cloud Pak as our orchestration tool to keep all of that governed and controlled, to enable some new AI models and new technologies in that space. We're actually starting to convert all of our custom-built frameworks into Python now as well, so we can have some of that embedded within Cloud Pak and start to use the rails of those frameworks within it.
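To give a feel for what a lightweight, config-driven ingestion framework of that kind might look like once rewritten in Python, here is a minimal sketch. The source registry, paths, and target layout are hypothetical; the idea is simply that onboarding a new source during an acquisition means adding a config entry rather than writing a new pipeline.

```python
import pandas as pd

# Hypothetical registry of sources; a real framework would load this from config files
SOURCES = [
    {"name": "acquired_bank_deposits", "path": "landing/deposits.csv", "key": "account_id"},
    {"name": "acquired_bank_loans",    "path": "landing/loans.csv",    "key": "loan_id"},
]

def ingest(source: dict) -> dict:
    """Load a source as-is (no transformation on load), profile it, and publish it to the lake."""
    df = pd.read_csv(source["path"])
    profile = {
        "rows": len(df),
        "null_rate": df.isna().mean().round(3).to_dict(),
        "duplicate_keys": int(df[source["key"]].duplicated().sum()),
    }
    # Land the raw data untouched; downstream consumers shape their own views
    df.to_parquet(f"lake/raw/{source['name']}.parquet", index=False)
    return profile

for src in SOURCES:
    print(src["name"], ingest(src))
```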
>> Okay, so you've got the ingestion side, where you've done a lot of automation, it sounds like; you've got the data profiling, and maybe classification, and automating that piece; then you've got the data quality piece, the governance; you've got visualization with Tableau; and this all fits together in an, quote unquote, open framework. Is that right? >> Yeah, exactly. With the framework itself, from our perspective, we're trying to keep the tools as consistent as we can. We really want to enable our users to have the tools they need in the toolbox and keep all that open. What we're trying to focus on is making sure they get the same data and the same experience through whatever tool and mechanism they're consuming from. That's where the platform mentality comes into play: having Cloud Pak in the middle to help govern all that and re-provision some of those data sources out for us has been a key component. >> Well, Steve, it sounds like you're making a lot of progress. So the days of the data temple, the high priests of data, the keepers of the data, have given way to more of a data culture, where the businesses kind of feel ownership of their own data. You believe in self-service; I think you've got much more confidence in the compliance and governance piece. Bring us home, just in terms of that notion of data culture, and where you are and where you're headed. >> Definitely. I think that's been a key for us too. As part of our strategy, we've put in place a strategy that helps define and dictate some of those structures and that ownership, and makes it more clear. Some of the failures of the past, if you will, with the monster data warehouse, were around the fact that nobody ever owned it; you always ran the risk that either the loudest consumer owned it or no one actually owned it. What we've started to do with this lake mentality, having all that data ingested into our frameworks, is make the data owners clear-cut: it's whoever sends that data in, whatever the book-of-record system is for that source data. We don't touch it, we don't transform it as we load it; it sits there, available, and you own it. We're applying the same mentality on the consumer side. We have a series of structures, from a consumption perspective, where all of our users are consuming data that's represented exactly how they want to consume it. So again, with that ownership we're trying to take out a lot of the gray area and enable them to say, yes, I own this, I understand what I'm going after, and I can put the ownership and the rules and the stewardship around it, as opposed to having that gray model in the middle that never quite works. But I guess, to close it out, the concept for us is really enabling people and end users: giving them the data they need when they need it. It's really about providing the framework and then the rails around doing that, and it's not about building out a formal warehouse model or, like you mentioned before, some of the ivory tower type concepts. It's really about purpose-built data sets, empowering our users with the data they need when they need it, all the way through, and infusing that into our applications so the applications provide the best user experiences and use the data to our advantage. It's all about enabling the business.
>> Before I let you go, I have to ask you, how is IBM doing as a partner? What do you like, and what could they be doing better to make your life easier? >> Sure, I think they've been a great partner for us, as far as that enablement mentality goes. The Cloud Pak platform has been key for us; we wouldn't be where we are without that toolset. Our journey originally, when we started looking at tools and the modernization of our stack, was around data quality and data governance type components and tools. Now, because of the platform, we've released our first Python AI models into the environment, and we have our studio capabilities natively, because that's all containerized within Cloud Pak. So we've been able to enable new use cases and really advance, where otherwise we would have had to buy a lot more technologies and capabilities and then integrate them ourselves. The ability to have all that done, and to be able to leverage that platform, has been key to helping us get some of these rolled out as quickly as we have. From a partnership perspective, they've been great as far as listening to what the next steps are for us, where we're headed, what we need more of, and what they can do to help us get there, so it's really been an encouraging environment. As far as what they could do better, I think it's just to keep delivering: delivery is king, so keep releasing the new functionality and features, and keep the quality of the product intact. >> Well, Steve, it was great having you on theCUBE; we always love to get the practitioner angle. It sounds like you've made a lot of progress, and as I said, we're never finished in this industry. So best of luck to you, stay safe, and thanks so much for sharing. >> Appreciate it, thank you. >> All right, and thank you for watching, everybody. This is Dave Vellante for theCUBE, DataOps in Action. We've got the crowd chat a little bit later, so get right there, but we're right back after this short break. [Music]

Published Date : May 28 2020

**Summary and Sentiment Analysis are not shown because of an improper transcript**


Indranil Chakraborty, Google Cloud | Google Cloud Next 2018


 

>> Live from San Francisco, it's theCUBE covering Google Cloud Next 2018. Brought to you by Google Cloud and its ecosystem partners. >> Welcome back everyone. This is theCUBE's live coverage of Google Cloud Next '18 in San Francisco. I'm John Furrier with Jeff Frick. We're at day three of three days of wall-to-wall coverage. Go to SiliconANGLE.com and theCUBE.net. Check out the on-demand videos and the Cloud series special journalism report that we have out there: tons of articles, tons of coverage of Google Next, with the news, analysis, and opinion, of course, on SiliconANGLE. Our next guest is Indranil Chakraborty, Product Manager for IoT, Google Cloud. Certainly IoT, part of the network, part of the Cloud; one of the hottest areas in Cloud is IoT. We've been seeing that. Welcome to theCUBE. >> Thank you. >> Thanks for joining us. IoT is certainly the intersection of a lot of things: Cloud, data center, A.I., soon to be, you know, cryptocurrency and blockchain coming down, not for you guys, but in general those are the big hottest areas. IoT isn't really its own category; it has to sit at the intersection of a lot of different markets that are kind of pure plays. So I first want you to explain to the folks out there watching, what is the Google IoT philosophy? What are the products trying to do? And what are you guys announcing here? >> Absolutely. Thanks for having me here, it's really great to be here. If you think about IoT, and if you think about what we have on Google Cloud, we already have a great set of services for data storage, processing, and machine intelligence. Right, so we have Cloud Machine Learning Engine, we have AutoML. So most of those data processing and intelligence services are already there. What we announced last year was Cloud IoT Core, which is our fully managed service for our customers and partners to easily and securely connect their IoT devices to Google Cloud, so they can start transmitting data, then ingest and store it and use the downstream services for analysis and machine intelligence. >> I mean, IoT is a great use case for Cloud because, one, Cloud shows that you can be incented to collect data. >> Right. >> Cuz now you have the lower cost storage. You've got machine learning, all these things are going on. It's great. >> Exactly. >> But IoT is now the Edge of the network. You've got sensors. You've got cars, like Teslas, people can relate to. So everything coming online has, not just an IP connection, anything that's a sensor. IoT has just been evolving. What is the Edge to you guys? What does that mean when I say IoT Edge? What is Google's view of the Edge? >> Yeah absolutely, it's a great question. You know, we identified early on the emerging trend of moving compute and intelligence to the edge, close to the device itself. So this week, as you already know, we've announced two products for Edge. One is Cloud IoT Edge, which is a software stack which can run on your gateway devices, cameras, or any connected device that has some compute capability, which extends the powerful AI and machine learning capabilities of Google Cloud to your Edge device. And we also announced Edge TPU, which is a Google-designed, high-performance chip for running machine learning inference on the Edge device itself. So with the combination of Cloud IoT Edge as a software stack and our Edge TPU, we think we have an integrated machine learning solution on the Google Cloud platform. >> How does that get rolled out?
So the chip, I'm assuming, you're doing OEM deals with manufacturers. Same with the software stack. Is the software stack portable? Explain how you roll those out. >> Yeah, you know, we are big into working with our ecosystem and we really want to build a robust partner ecosystem. So we are working with semiconductor companies, such as NXP and Arm, who will build a system-on-module using our Google Edge TPU, which can then be used by gateway device makers. So we have partnerships with Harting, Nokia, and NEXCOM, who are going to take those SoMs, add them to their gateway devices, and take them to market. We're also working with a lot of computing companies, such as ADLINK, Acton, and a couple of others, Olya. So they can build analytics solutions using our Cloud IoT Edge software and Edge TPU, combined with the rest of the Cloud IoT platform. So we're pretty excited about the partners. >> But every coin has two sides, right? So the kind of knock on the Edge is, now your attack surface on the security side is growing exponentially. So clearly, security is an important part of what you guys do. And now this is kind of a different challenge, because your points of presence are going to expand exponentially to all these connected autonomous devices. >> Yep, that's a great point. And you know, we take security very seriously. In fact, since last year when we announced Cloud IoT Core, we reject any connection that doesn't use TLS, number one. And number two, we individually authenticate each and every device using an asymmetric key pair. In addition to that, we've also announced a partnership with Microchip. So Microchip has built this microcontroller crypto chip, which can hold the private key inside the crypto element, and we use a JWT token that is signed inside the chip itself. So your private key never leaves the chip at all. That's one additional reinforcement for security. So we have end-to-end security. We make sure that the devices are connecting over TLS, but we also have a hardware root of trust on the Edge device as well.
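For readers curious what that per-device, JWT-based authentication looks like from the device side, here is a minimal sketch following the publicly documented Cloud IoT Core MQTT flow. The project, region, registry, and device IDs are hypothetical, and the private key is read from a local file here; on a Microchip-style secure element the signing step would instead happen inside the chip.

```python
import datetime
import jwt  # PyJWT
import paho.mqtt.client as mqtt

PROJECT = "my-gcp-project"  # hypothetical project ID
CLIENT_ID = ("projects/my-gcp-project/locations/us-central1/"
             "registries/my-registry/devices/my-device")

def make_jwt(project_id: str, key_path: str) -> str:
    """Create a short-lived token; Cloud IoT Core uses it as the MQTT password."""
    now = datetime.datetime.utcnow()
    claims = {"iat": now, "exp": now + datetime.timedelta(minutes=60), "aud": project_id}
    with open(key_path, "r") as f:
        private_key = f.read()
    return jwt.encode(claims, private_key, algorithm="ES256")

client = mqtt.Client(client_id=CLIENT_ID)
# Username is ignored by the bridge; the JWT carries the device identity
client.username_pw_set(username="unused", password=make_jwt(PROJECT, "ec_private.pem"))
client.tls_set()  # TLS is mandatory; non-TLS connections are rejected
client.connect("mqtt.googleapis.com", 8883)
client.publish("/devices/my-device/events", b"telemetry payload", qos=1)
```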
>> The token model is interesting. Talk about blockchain, because, you know, David Floyer on our analyst team, he and I are constantly riffing on that. IoT actually is an interesting use case for blockchain and potentially token economics. How do you guys view that? I know you just mentioned that this is kind of a thing there. Does it fit in your vision at all? What's your position on how that would work out? >> You know, we are closely looking at blockchain technology. As of today, we don't have anything specific to announce from a product perspective, but we do use JSON web tokens, which are standard on the web, and sign those using the private keys. So that works beautifully, but we're closely monitoring and looking at it. We don't have anything to announce today. >> Not yet, but they're going to share that. Their research is working on it, interesting scenario. So in general, benefits to customers who're working with IoT, your team, cuz you have the core, you have the chip, you have the software stack. There's always an architectural discussion depending upon the environment. Do you move the compute to the data? Do you move the data to the Cloud? What's the role of data in all this, cuz certainly you've got the processing power? What's the architectural framework and the benefits to customers who are working with Google? >> Yeah, so let me give a specific example, LG CNS. They want to improve productivity in the factory, and what they've done is they've built a machine learning model to detect defects on their assembly line using Cloud Machine Learning Engine. It took one engineer a couple of weeks, and they would train the model on Cloud. Now, with Cloud IoT Edge and the Edge TPU, they can run that trained model locally on the camera itself, so they can do real-time defect analysis on a pretty fast-moving assembly line. So that's the model we are working on, where you use Cloud and its high compute for training, but you use the Edge TPU and Cloud IoT Edge for local inference, for real-time detection as well. >> How do you guys look at the IoT market? Because depending on how you're looking at it, you can look at smart cities, you can look at self-driving cars. There's a huge aperture of different use cases. It could be humans with devices; also you guys have Android, so it's kind of a broad scope. You guys have got to have that core tech, which it sounds like you're putting in the center of all this. How do you guys look at that? How do you organize around that? I think Diane Greene mentioned verticals, for instance; are there different verticals? I mean, how do you guys go at that market with the product? >> IoT is a nascent market. And what we offer as Google Cloud is a horizontal platform, what we call the Cloud IoT platform, which has got Cloud IoT Core on the Cloud side, Cloud IoT Edge, and the Edge TPU. And we really want to work with our partners, our solution integrators and ISVs, to help build those vertical applications. So we're working with partners on the healthcare side, manufacturing; we have Odin Technology as one of the partners to really build these verticals up. >> You guys are not going to be dogmatic, this is how our IoT has to be. You're going to let a thousand flowers bloom, kind of philosophy. Put it out there, connect, and let the innovation happen with the ecosystem. >> Yeah, we really believe in having a robust ecosystem. So we want to provide a horizontal platform, which really makes it easy for partners and customers to build vertical solutions. >> Another kind of unique IoT challenge, which you didn't have in the past: we've all seen great pictures of the inside of Google data centers. They're beautiful and tight, and lots of pretty pictures, very different than out in a minefield or a lot of these challenging IT environments where power could be a challenge. The weather could be a challenge. Connectivity to the internet could be a challenge. Obviously, and then you need to power them. And you have to think about how much storage you have locally, how much compute you have locally. So as you look at that landscape, how has that shaped your guys' views? What are some of the unique challenges that you guys have faced, and how are you overcoming some of those? >> Yeah, that's a great question, and this is one of the primary reasons why we announced Cloud IoT Edge, which is a software stack, and Edge TPU. So for use cases where you have limited connectivity, oil wells or farm fields, windmills, connectivity is limited and you cannot rely on connectivity for reliable operations. But you can use Cloud IoT Edge with our partner device ecosystem to run some of the compute locally. You can store data locally. You can analyze it locally, and then push some of the incremental data to the Cloud to further update your model in the Cloud. So that's how we're thinking about this. We have to have some compute locally for those reasons.
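As a rough sketch of that train-in-the-cloud, infer-at-the-edge pattern, here is what running a compiled TensorFlow Lite model on an Edge TPU device can look like from Python. The model file, label list, and input image are hypothetical, and the delegate call assumes the Edge TPU runtime library is installed on the gateway or camera.

```python
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter, load_delegate

# Model trained in the cloud, then compiled for the Edge TPU and copied to the device
interpreter = Interpreter(
    model_path="defect_detector_edgetpu.tflite",               # hypothetical model
    experimental_delegates=[load_delegate("libedgetpu.so.1")],  # Edge TPU runtime
)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Grab a frame from the assembly-line camera (a saved image here for simplicity)
width, height = int(inp["shape"][2]), int(inp["shape"][1])
frame = np.expand_dims(
    np.array(Image.open("frame.jpg").resize((width, height)), dtype=np.uint8), 0)

interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]

labels = ["ok", "scratch", "dent"]  # hypothetical classes
print("prediction:", labels[int(np.argmax(scores))])
```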
>> So it releases the hard coupling, if you will. It's really got to be a dynamic coupling based on the situation, based on the timing, maybe. >> Exactly. >> Scheduled updates, and those types of things. So it's not just connected. >> Exactly. It doesn't need to be continuously connected, right? As long as there's enough connectivity to download the updated model, to download the latest firmware and software, you can run local compute and local machine learning inference on the Edge itself. That's the model we're looking at. So you can train in the Cloud, push down the updates to the Edge device, and run local compute and intelligence on the device itself. >> A lot of the conversations we've been having lately have been about how you manage the Edge; it's been an area of discussion. Whether you want a multi-threaded computer, basically, on a device that could be attacked with malware; putting bounds around certain things. You need the IP there. You want to have as much compute as possible, obviously, we'd agree. But there are going to be policies you're starting to think about. This is where I think it gets interesting when you look at what's going on with the abstractions up the stack that you guys are doing. How does that kind of thinking impact rollouts of IoT? Because I imagine that you will have policies. Some might trickle data back. It might not be data intensive. Some might want more security. Containers, all this kind of tying in. Is that right? Am I getting that right? How do you see that happening? >> So when you think about the Edge, there are different layers. There are different tiers. There are the gateway-class devices, which have high compute, all the way down to sensors. Our focus really is on the Edge devices which have some decent compute capabilities, and you can scale up to high-end devices as well. And when you think about policies, on the Cloud side we have IAM policies, so you can define roles and you can define policies, based on which you can decide which devices should get what software, or which users should get access to particular data types as well. So we have the infrastructure already, and we're leveraging that for the IoT platform. >> Yeah, and automate a lot of those kinds of activities as well. >> Exactly. >> Alright, so I've got to ask you about the show. What are some of the cool things you're seeing, for the folks that couldn't make it and are watching this video live or on demand? What's happening here at Google? What's the phenomenon that is Google Cloud? What are some of the hot stories? What's the vibe? What are the cool things that you are seeing? >> Absolutely. So I'm biased, so I'm going to start with IoT. You know, we have an IoT showcase where we have a pedestal showing the Edge TPU and the Edge TPU board as well. There is a lot of work happening there, and there's a team maintaining it there as well, so I would highly encourage attendees to go check it out. >> What are people saying about that? The demos and the sessions, what's some of the feedback? Share some color commentary around the reactions. >> Yeah, we've been getting a lot of positive reactions. In fact, we just had a couple of breakout sessions, and a lot of interest from partners across the board to engage with us. So we are pretty excited with our announcements on the Edge side.
The whole orchestration of training the model in the Cloud and then pushing it down and then sending updates, that's where it really makes it easy for a lot of the partners. So they're excited about it as well. >> They're going to make some good money with it too. You guys are making the market, and not trying to go too far. Laying the foundational work, the horizontal scale. >> Yes, exactly. And we really focused, for the Edge TPU, on performance per dollar and performance per watt. That has been what we are striving for: really high performance at lower cost. So that's what we're targeting. And a couple of other things: the whole serverless capability, and the fact that Cloud Functions have become GA, is pretty exciting. Cloud IoT Core is also a fully managed, serverless architecture as well. The AI and AutoML which we announced, with NLP and text and speech, is pretty exciting too, and that works very well with some of our IoT use cases. So I think those are a couple of announcements which I'm pretty excited about. >> Yeah, I think the automation theme too really resonated well in all that. Cuz what comes out of that is, humans have still got to be more proficient in doing the new stuff, but also they've got to run this. And you've got developers to build the apps that drive value, so you've got the value development with the applications, and then also the operational side, which is, I don't want to say becoming generic, but it's not as specialized as it used to be. Network operator, this guy does this, this gal does that. I mean, it used to be very stovepiped. Now it's much more of a how do you run the environment? >> Exactly, and to your point, even in the IoT space it's also very relevant. I mean, there are a lot of overlaps between what used to be just DevOps and OT and IT. There are a lot of overlaps there. And so we're looking at it closely as well, to make sure that we can really simplify the overall requirements and the tooling which is needed for building an IoT solution. >> For the people that are not following Google as closely as, say, we are, for instance, they're not inside the ropes, inside baseball, if you will, in the industry. They see Google Cloud, they know Google as Gmail, search, et cetera. If they looked a couple of years ago, Google Cloud had App Engine, the OG of Google Cloud, as it's called. What would you say to the folks now that are watching? What's different about Google Cloud now, and what should they know about Google Cloud that they may not know? >> Absolutely, and the first thing is we are very serious about enterprise. You can see here the number of attendees who have come and how we have multiple buildings where we've organized the conference. We're very serious about enterprise. Second, back in the day, two years back, we were really focused on building products which work for specific use cases. We didn't think about the end-to-end solution, but now the focus has changed. We always had the technology and the packaging of the products, and now we're thinking about providing end-to-end solutions, the framework where, for a business user, an enterprise user, they can just take the solution and they know it will work. Alright, so there's been a lot of focus on that. And our key differentiator is machine intelligence and AI, right? That's where Google thrives. We've been spending a lot of time on it, and now we're focused on democratizing AI.
>> And I really think you guys have done a good job with the mindset of making it consumable, in an end-to-end framework, with options. Kubernetes and containers have been around for a while, but it's working across multiple environments. I think that's a real mindset shift. >> Exactly. >> So congratulations. >> Thank you. >> Thanks for coming on, appreciate it. >> Absolutely, great being here with you guys. >> Google IoT: just plug into Google Cloud. It'll suck all your data in, give you some compute at the Edge, and open it up to partners, really focusing on the ecosystem and enabling new types of functionality. It's theCUBE, bringing you the data here on day three at Google Cloud Next '18. We'll be right back with more coverage. Stay with us after this short break. (modern music)

Published Date : Jul 26 2018

SUMMARY :

theCUBE interview from day three of Google Cloud Next '18 with Indranil Chakraborty of Google Cloud, covering the Cloud IoT platform and the Edge TPU announcement: training models in the Cloud and pushing updates down to Edge devices for local inference, IAM-based policies for devices and data, serverless pieces such as Cloud Functions (now GA) and Cloud IoT Core, the partner ecosystem, and Google Cloud's enterprise and end-to-end solution focus.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David Floy | PERSON | 0.99+
NXP | ORGANIZATION | 0.99+
ADLINK | ORGANIZATION | 0.99+
NEXCOM | ORGANIZATION | 0.99+
Jeff Frick | PERSON | 0.99+
Microchip | ORGANIZATION | 0.99+
Indranil Chakraborty | PERSON | 0.99+
Acton | ORGANIZATION | 0.99+
Harting | ORGANIZATION | 0.99+
Ann Green | PERSON | 0.99+
two sides | QUANTITY | 0.99+
John Furrier | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
Olya | ORGANIZATION | 0.99+
last year | DATE | 0.99+
San Francisco | LOCATION | 0.99+
Android | TITLE | 0.99+
Nokia | ORGANIZATION | 0.99+
three days | QUANTITY | 0.99+
today | DATE | 0.99+
two products | QUANTITY | 0.99+
Google Cloud | TITLE | 0.99+
Second | QUANTITY | 0.99+
One | QUANTITY | 0.99+
this week | DATE | 0.98+
one | QUANTITY | 0.98+
Odin Technology | ORGANIZATION | 0.98+
Arm | ORGANIZATION | 0.98+
Cloud IoT Edge | TITLE | 0.98+
two years back | DATE | 0.98+
first | QUANTITY | 0.97+
Edge | TITLE | 0.97+
SiliconANGLE | ORGANIZATION | 0.97+
Cloud | TITLE | 0.97+
Google Next | TITLE | 0.96+
Teslas | ORGANIZATION | 0.96+
each | QUANTITY | 0.96+
Gmail | TITLE | 0.96+
first thing | QUANTITY | 0.95+
Edge TPU | TITLE | 0.95+
couple years ago | DATE | 0.95+
theCUBE | ORGANIZATION | 0.93+
Container | ORGANIZATION | 0.92+
Google Data Centers | ORGANIZATION | 0.92+
LG CNS | ORGANIZATION | 0.91+
day three | QUANTITY | 0.91+
Cloud IoT | TITLE | 0.9+
SiliconANGLE dot com | ORGANIZATION | 0.89+
every device | QUANTITY | 0.87+
tons of articles | QUANTITY | 0.85+
Edge | COMMERCIAL_ITEM | 0.82+
2018 | DATE | 0.8+
tons | QUANTITY | 0.77+